00:00:00.001 Started by upstream project "autotest-per-patch" build number 132821
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.063 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.065 The recommended git tool is: git
00:00:00.065 using credential 00000000-0000-0000-0000-000000000002
00:00:00.067 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.092 Fetching changes from the remote Git repository
00:00:00.093 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.139 Using shallow fetch with depth 1
00:00:00.139 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.139 > git --version # timeout=10
00:00:00.185 > git --version # 'git version 2.39.2'
00:00:00.185 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.212 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.212 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.999 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.009 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.020 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.021 > git config core.sparsecheckout # timeout=10
00:00:05.033 > git read-tree -mu HEAD # timeout=10
00:00:05.049 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.069 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.069 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.153 [Pipeline] Start of Pipeline
00:00:05.162 [Pipeline] library
00:00:05.163 Loading library shm_lib@master
00:00:05.163 Library shm_lib@master is cached. Copying from home.
00:00:05.174 [Pipeline] node
00:00:05.191 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.193 [Pipeline] {
00:00:05.200 [Pipeline] catchError
00:00:05.201 [Pipeline] {
00:00:05.213 [Pipeline] wrap
00:00:05.221 [Pipeline] {
00:00:05.225 [Pipeline] stage
00:00:05.226 [Pipeline] { (Prologue)
00:00:05.413 [Pipeline] sh
00:00:05.698 + logger -p user.info -t JENKINS-CI
00:00:05.715 [Pipeline] echo
00:00:05.717 Node: WFP4
00:00:05.725 [Pipeline] sh
00:00:06.023 [Pipeline] setCustomBuildProperty
00:00:06.032 [Pipeline] echo
00:00:06.034 Cleanup processes
00:00:06.037 [Pipeline] sh
00:00:06.317 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.317 350921 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.326 [Pipeline] sh
00:00:06.604 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.604 ++ grep -v 'sudo pgrep'
00:00:06.604 ++ awk '{print $1}'
00:00:06.604 + sudo kill -9
00:00:06.604 + true
00:00:06.617 [Pipeline] cleanWs
00:00:06.625 [WS-CLEANUP] Deleting project workspace...
00:00:06.625 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.630 [WS-CLEANUP] done
00:00:06.633 [Pipeline] setCustomBuildProperty
00:00:06.646 [Pipeline] sh
00:00:06.929 + sudo git config --global --replace-all safe.directory '*'
00:00:07.004 [Pipeline] httpRequest
00:00:07.567 [Pipeline] echo
00:00:07.569 Sorcerer 10.211.164.112 is alive
00:00:07.579 [Pipeline] retry
00:00:07.581 [Pipeline] {
00:00:07.598 [Pipeline] httpRequest
00:00:07.606 HttpMethod: GET
00:00:07.608 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.610 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.626 Response Code: HTTP/1.1 200 OK
00:00:07.626 Success: Status code 200 is in the accepted range: 200,404
00:00:07.626 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.770 [Pipeline] }
00:00:15.788 [Pipeline] // retry
00:00:15.796 [Pipeline] sh
00:00:16.080 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:16.095 [Pipeline] httpRequest
00:00:16.521 [Pipeline] echo
00:00:16.524 Sorcerer 10.211.164.112 is alive
00:00:16.532 [Pipeline] retry
00:00:16.534 [Pipeline] {
00:00:16.547 [Pipeline] httpRequest
00:00:16.551 HttpMethod: GET
00:00:16.551 URL: http://10.211.164.112/packages/spdk_86d35c37afb5a441206b26f894d7511170c8c587.tar.gz
00:00:16.552 Sending request to url: http://10.211.164.112/packages/spdk_86d35c37afb5a441206b26f894d7511170c8c587.tar.gz
00:00:16.563 Response Code: HTTP/1.1 200 OK
00:00:16.564 Success: Status code 200 is in the accepted range: 200,404
00:00:16.564 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_86d35c37afb5a441206b26f894d7511170c8c587.tar.gz
00:01:16.136 [Pipeline] }
00:01:16.153 [Pipeline] // retry
00:01:16.161 [Pipeline] sh
00:01:16.442 + tar --no-same-owner -xf spdk_86d35c37afb5a441206b26f894d7511170c8c587.tar.gz
00:01:18.989 [Pipeline] sh
00:01:19.273 + git -C spdk log --oneline -n5
00:01:19.273 86d35c37a bdev: simplify bdev_reset_freeze_channel
00:01:19.273 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:01:19.273 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:01:19.273 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:01:19.273 0ea9ac02f accel/mlx5: Create pool of UMRs
00:01:19.283 [Pipeline] }
00:01:19.295 [Pipeline] // stage
00:01:19.303 [Pipeline] stage
00:01:19.305 [Pipeline] { (Prepare)
00:01:19.319 [Pipeline] writeFile
00:01:19.333 [Pipeline] sh
00:01:19.614 + logger -p user.info -t JENKINS-CI
00:01:19.625 [Pipeline] sh
00:01:19.907 + logger -p user.info -t JENKINS-CI
00:01:19.918 [Pipeline] sh
00:01:20.199 + cat autorun-spdk.conf
00:01:20.199 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.199 SPDK_TEST_NVMF=1
00:01:20.199 SPDK_TEST_NVME_CLI=1
00:01:20.199 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:20.199 SPDK_TEST_NVMF_NICS=e810
00:01:20.199 SPDK_TEST_VFIOUSER=1
00:01:20.199 SPDK_RUN_UBSAN=1
00:01:20.199 NET_TYPE=phy
00:01:20.206 RUN_NIGHTLY=0
00:01:20.210 [Pipeline] readFile
00:01:20.229 [Pipeline] withEnv
00:01:20.231 [Pipeline] {
00:01:20.242 [Pipeline] sh
00:01:20.524 + set -ex
00:01:20.524 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:20.524 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:20.524 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.524 ++ SPDK_TEST_NVMF=1
00:01:20.524 ++ SPDK_TEST_NVME_CLI=1
00:01:20.524 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:20.525 ++ SPDK_TEST_NVMF_NICS=e810
00:01:20.525 ++ SPDK_TEST_VFIOUSER=1
00:01:20.525 ++ SPDK_RUN_UBSAN=1
00:01:20.525 ++ NET_TYPE=phy
00:01:20.525 ++ RUN_NIGHTLY=0
00:01:20.525 + case $SPDK_TEST_NVMF_NICS in
00:01:20.525 + DRIVERS=ice
00:01:20.525 + [[ tcp == \r\d\m\a ]]
00:01:20.525 + [[ -n ice ]]
00:01:20.525 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:20.525 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:20.525 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:20.525 rmmod: ERROR: Module i40iw is not currently loaded
00:01:20.525 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:20.525 + true
00:01:20.525 + for D in $DRIVERS
00:01:20.525 + sudo modprobe ice
00:01:20.525 + exit 0
00:01:20.534 [Pipeline] }
00:01:20.547 [Pipeline] // withEnv
00:01:20.551 [Pipeline] }
00:01:20.564 [Pipeline] // stage
00:01:20.573 [Pipeline] catchError
00:01:20.574 [Pipeline] {
00:01:20.586 [Pipeline] timeout
00:01:20.586 Timeout set to expire in 1 hr 0 min
00:01:20.588 [Pipeline] {
00:01:20.602 [Pipeline] stage
00:01:20.603 [Pipeline] { (Tests)
00:01:20.616 [Pipeline] sh
00:01:20.900 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:20.900 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:20.900 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:20.900 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:20.900 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:20.900 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:20.900 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:20.900 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:20.900 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:20.900 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:20.900 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:20.900 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:20.900 + source /etc/os-release
00:01:20.900 ++ NAME='Fedora Linux'
00:01:20.900 ++ VERSION='39 (Cloud Edition)'
00:01:20.900 ++ ID=fedora
00:01:20.900 ++ VERSION_ID=39
00:01:20.900 ++ VERSION_CODENAME=
00:01:20.900 ++ PLATFORM_ID=platform:f39
00:01:20.900 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:20.900 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:20.900 ++ LOGO=fedora-logo-icon
00:01:20.900 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:20.900 ++ HOME_URL=https://fedoraproject.org/
00:01:20.900 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:20.900 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:20.900 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:20.900 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:20.900 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:20.900 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:20.900 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:20.900 ++ SUPPORT_END=2024-11-12
00:01:20.900 ++ VARIANT='Cloud Edition'
00:01:20.900 ++ VARIANT_ID=cloud
00:01:20.900 + uname -a
00:01:20.900 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:01:20.900 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:23.436 Hugepages
00:01:23.436 node hugesize free / total
00:01:23.436 node0 1048576kB 0 / 0
00:01:23.436 node0 2048kB 0 / 0
00:01:23.436 node1 1048576kB 0 / 0
00:01:23.436 node1 2048kB 0 / 0
00:01:23.436
00:01:23.436 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:23.436 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:23.436 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:23.436 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:23.436 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:23.436 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:23.436 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:23.436 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:23.436 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:23.436 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:23.436 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:23.436 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:23.436 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:23.436 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:23.436 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:23.436 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:23.436 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:23.436 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:23.436 + rm -f /tmp/spdk-ld-path
00:01:23.436 + source autorun-spdk.conf
00:01:23.436 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:23.436 ++ SPDK_TEST_NVMF=1
00:01:23.436 ++ SPDK_TEST_NVME_CLI=1
00:01:23.436 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:23.436 ++ SPDK_TEST_NVMF_NICS=e810
00:01:23.436 ++ SPDK_TEST_VFIOUSER=1
00:01:23.436 ++ SPDK_RUN_UBSAN=1
00:01:23.436 ++ NET_TYPE=phy
00:01:23.436 ++ RUN_NIGHTLY=0
00:01:23.436 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:23.436 + [[ -n '' ]]
00:01:23.436 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:23.436 + for M in /var/spdk/build-*-manifest.txt
00:01:23.436 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:23.436 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:23.436 + for M in /var/spdk/build-*-manifest.txt
00:01:23.436 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:23.436 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:23.436 + for M in /var/spdk/build-*-manifest.txt
00:01:23.436 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:23.436 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:23.436 ++ uname
00:01:23.436 + [[ Linux == \L\i\n\u\x ]]
00:01:23.436 + sudo dmesg -T
00:01:23.436 + sudo dmesg --clear
00:01:23.695 + dmesg_pid=351863
00:01:23.695 + [[ Fedora Linux == FreeBSD ]]
00:01:23.695 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:23.695 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:23.695 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:23.695 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:23.695 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:23.695 + [[ -x /usr/src/fio-static/fio ]]
00:01:23.695 + sudo dmesg -Tw
00:01:23.695 + export FIO_BIN=/usr/src/fio-static/fio
00:01:23.695 + FIO_BIN=/usr/src/fio-static/fio
00:01:23.695 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:23.695 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:23.695 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:23.695 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:23.695 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:23.695 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:23.695 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:23.695 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:23.695 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:23.695 04:38:14 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:23.695 04:38:14 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:23.695 04:38:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:23.695 04:38:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:23.695 04:38:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:23.695 04:38:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:23.695 04:38:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:23.695 04:38:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:23.695 04:38:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:23.695 04:38:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:23.695 04:38:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:23.695 04:38:14 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:23.695 04:38:14 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:23.695 04:38:14 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:23.695 04:38:14 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:23.695 04:38:14 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:23.695 04:38:14 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:23.695 04:38:14 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:23.695 04:38:14 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:23.696 04:38:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.696 04:38:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.696 04:38:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.696 04:38:14 -- paths/export.sh@5 -- $ export PATH
00:01:23.696 04:38:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.696 04:38:14 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:23.696 04:38:14 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:23.696 04:38:14 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733801894.XXXXXX
00:01:23.696 04:38:14 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733801894.YCKb8I
00:01:23.696 04:38:14 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:23.696 04:38:14 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:23.696 04:38:14 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:23.696 04:38:14 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:23.696 04:38:14 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:23.696 04:38:14 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:23.696 04:38:14 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:23.696 04:38:14 -- common/autotest_common.sh@10 -- $ set +x
00:01:23.696 04:38:14 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:23.696 04:38:14 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:23.696 04:38:14 -- pm/common@17 -- $ local monitor
00:01:23.696 04:38:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.696 04:38:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.696 04:38:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.696 04:38:14 -- pm/common@21 -- $ date +%s
00:01:23.696 04:38:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.696 04:38:14 -- pm/common@21 -- $ date +%s
00:01:23.696 04:38:14 -- pm/common@25 -- $ sleep 1
00:01:23.696 04:38:14 -- pm/common@21 -- $ date +%s
00:01:23.696 04:38:14 -- pm/common@21 -- $ date +%s
00:01:23.696 04:38:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733801894
00:01:23.696 04:38:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733801894
00:01:23.696 04:38:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733801894
00:01:23.696 04:38:14 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733801894
00:01:23.696 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733801894_collect-vmstat.pm.log
00:01:23.696 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733801894_collect-cpu-load.pm.log
00:01:23.696 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733801894_collect-cpu-temp.pm.log
00:01:23.955 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733801894_collect-bmc-pm.bmc.pm.log
00:01:24.888 04:38:15 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:24.888 04:38:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:24.888 04:38:15 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:24.888 04:38:15 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:24.888 04:38:15 -- spdk/autobuild.sh@16 -- $ date -u
00:01:24.888 Tue Dec 10 03:38:15 AM UTC 2024
00:01:24.888 04:38:15 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:24.888 v25.01-pre-312-g86d35c37a
00:01:24.888 04:38:15 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:24.888 04:38:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:24.888 04:38:15 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:24.888 04:38:15 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:24.888 04:38:15 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:24.888 04:38:15 -- common/autotest_common.sh@10 -- $ set +x
00:01:24.888 ************************************
00:01:24.888 START TEST ubsan
00:01:24.888 ************************************
00:01:24.888 04:38:15 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:24.888 using ubsan
00:01:24.888
00:01:24.888 real 0m0.000s
00:01:24.888 user 0m0.000s
00:01:24.888 sys 0m0.000s
00:01:24.888 04:38:15 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:24.888 04:38:15 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:24.888 ************************************
00:01:24.888 END TEST ubsan
00:01:24.888 ************************************
00:01:24.888 04:38:15 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:24.888 04:38:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:24.888 04:38:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:24.888 04:38:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:24.888 04:38:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:24.888 04:38:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:24.888 04:38:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:24.888 04:38:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:24.888 04:38:15 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:24.888 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:24.888 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:25.455 Using 'verbs' RDMA provider
00:01:38.224 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:50.436 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:50.436 Creating mk/config.mk...done.
00:01:50.436 Creating mk/cc.flags.mk...done.
00:01:50.436 Type 'make' to build.
00:01:50.436 04:38:41 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:01:50.436 04:38:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:50.436 04:38:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:50.436 04:38:41 -- common/autotest_common.sh@10 -- $ set +x
00:01:50.436 ************************************
00:01:50.436 START TEST make
00:01:50.436 ************************************
00:01:50.436 04:38:41 make -- common/autotest_common.sh@1129 -- $ make -j96
00:01:50.696 make[1]: Nothing to be done for 'all'.
00:01:52.088 The Meson build system
00:01:52.088 Version: 1.5.0
00:01:52.088 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:52.088 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:52.088 Build type: native build
00:01:52.088 Project name: libvfio-user
00:01:52.089 Project version: 0.0.1
00:01:52.089 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:52.089 C linker for the host machine: cc ld.bfd 2.40-14
00:01:52.089 Host machine cpu family: x86_64
00:01:52.089 Host machine cpu: x86_64
00:01:52.089 Run-time dependency threads found: YES
00:01:52.089 Library dl found: YES
00:01:52.089 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:52.089 Run-time dependency json-c found: YES 0.17
00:01:52.089 Run-time dependency cmocka found: YES 1.1.7
00:01:52.089 Program pytest-3 found: NO
00:01:52.089 Program flake8 found: NO
00:01:52.089 Program misspell-fixer found: NO
00:01:52.089 Program restructuredtext-lint found: NO
00:01:52.089 Program valgrind found: YES (/usr/bin/valgrind)
00:01:52.089 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:52.089 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:52.089 Compiler for C supports arguments -Wwrite-strings: YES
00:01:52.089 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:52.089 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:52.089 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:52.089 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:52.089 Build targets in project: 8
00:01:52.089 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:52.089 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:52.089
00:01:52.089 libvfio-user 0.0.1
00:01:52.089
00:01:52.089 User defined options
00:01:52.089 buildtype : debug
00:01:52.089 default_library: shared
00:01:52.089 libdir : /usr/local/lib
00:01:52.089
00:01:52.089 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:52.656 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:52.915 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:52.915 [2/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:52.915 [3/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:52.915 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:52.915 [5/37] Compiling C object samples/null.p/null.c.o
00:01:52.915 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:52.915 [7/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:52.915 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:52.915 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:52.915 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:52.915 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:52.915 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:52.915 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:52.915 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:52.915 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:52.915 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:52.915 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:52.915 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:52.915 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:52.915 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:52.915 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:52.915 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:52.915 [23/37] Compiling C object samples/client.p/client.c.o
00:01:52.915 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:52.915 [25/37] Compiling C object samples/server.p/server.c.o
00:01:52.915 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:52.915 [27/37] Linking target samples/client
00:01:52.915 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:52.915 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:52.915 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:53.174 [31/37] Linking target test/unit_tests
00:01:53.174 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:53.174 [33/37] Linking target samples/server
00:01:53.174 [34/37] Linking target samples/null
00:01:53.174 [35/37] Linking target samples/lspci
00:01:53.174 [36/37] Linking target samples/gpio-pci-idio-16
00:01:53.174 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:53.174 INFO: autodetecting backend as ninja
00:01:53.174 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:53.174 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:53.742 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:53.742 ninja: no work to do.
00:01:59.013 The Meson build system
00:01:59.013 Version: 1.5.0
00:01:59.013 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:59.013 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:59.013 Build type: native build
00:01:59.013 Program cat found: YES (/usr/bin/cat)
00:01:59.013 Project name: DPDK
00:01:59.013 Project version: 24.03.0
00:01:59.013 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:59.013 C linker for the host machine: cc ld.bfd 2.40-14
00:01:59.013 Host machine cpu family: x86_64
00:01:59.013 Host machine cpu: x86_64
00:01:59.013 Message: ## Building in Developer Mode ##
00:01:59.013 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:59.013 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:59.013 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:59.013 Program python3 found: YES (/usr/bin/python3)
00:01:59.013 Program cat found: YES (/usr/bin/cat)
00:01:59.013 Compiler for C supports arguments -march=native: YES
00:01:59.013 Checking for size of "void *" : 8
00:01:59.013 Checking for size of "void *" : 8 (cached)
00:01:59.013 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:59.013 Library m found: YES
00:01:59.013 Library numa found: YES
00:01:59.013 Has header "numaif.h" : YES
00:01:59.013 Library fdt found: NO
00:01:59.013 Library execinfo found: NO
00:01:59.013 Has header "execinfo.h" : YES
00:01:59.013 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:59.013 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:59.014 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:59.014 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:59.014 Run-time dependency openssl found: YES 3.1.1
00:01:59.014 Run-time dependency libpcap found: YES 1.10.4
00:01:59.014 Has header "pcap.h" with dependency libpcap: YES
00:01:59.014 Compiler for C supports arguments -Wcast-qual: YES
00:01:59.014 Compiler for C supports arguments -Wdeprecated: YES
00:01:59.014 Compiler for C supports arguments -Wformat: YES
00:01:59.014 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:59.014 Compiler for C supports arguments -Wformat-security: NO
00:01:59.014 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:59.014 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:59.014 Compiler for C supports arguments -Wnested-externs: YES
00:01:59.014 Compiler for C supports arguments -Wold-style-definition: YES
00:01:59.014 Compiler for C supports arguments -Wpointer-arith: YES
00:01:59.014 Compiler for C supports arguments -Wsign-compare: YES
00:01:59.014 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:59.014 Compiler for C supports arguments -Wundef: YES
00:01:59.014 Compiler for C supports arguments -Wwrite-strings: YES
00:01:59.014 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:59.014 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:59.014 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:59.014 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:59.014 Program objdump found: YES (/usr/bin/objdump)
00:01:59.014 Compiler for C supports arguments -mavx512f: YES
00:01:59.014 Checking if "AVX512 checking" compiles: YES
00:01:59.014
Fetching value of define "__SSE4_2__" : 1 00:01:59.014 Fetching value of define "__AES__" : 1 00:01:59.014 Fetching value of define "__AVX__" : 1 00:01:59.014 Fetching value of define "__AVX2__" : 1 00:01:59.014 Fetching value of define "__AVX512BW__" : 1 00:01:59.014 Fetching value of define "__AVX512CD__" : 1 00:01:59.014 Fetching value of define "__AVX512DQ__" : 1 00:01:59.014 Fetching value of define "__AVX512F__" : 1 00:01:59.014 Fetching value of define "__AVX512VL__" : 1 00:01:59.014 Fetching value of define "__PCLMUL__" : 1 00:01:59.014 Fetching value of define "__RDRND__" : 1 00:01:59.014 Fetching value of define "__RDSEED__" : 1 00:01:59.014 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:59.014 Fetching value of define "__znver1__" : (undefined) 00:01:59.014 Fetching value of define "__znver2__" : (undefined) 00:01:59.014 Fetching value of define "__znver3__" : (undefined) 00:01:59.014 Fetching value of define "__znver4__" : (undefined) 00:01:59.014 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:59.014 Message: lib/log: Defining dependency "log" 00:01:59.014 Message: lib/kvargs: Defining dependency "kvargs" 00:01:59.014 Message: lib/telemetry: Defining dependency "telemetry" 00:01:59.014 Checking for function "getentropy" : NO 00:01:59.014 Message: lib/eal: Defining dependency "eal" 00:01:59.014 Message: lib/ring: Defining dependency "ring" 00:01:59.014 Message: lib/rcu: Defining dependency "rcu" 00:01:59.014 Message: lib/mempool: Defining dependency "mempool" 00:01:59.014 Message: lib/mbuf: Defining dependency "mbuf" 00:01:59.014 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:59.014 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:59.014 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:59.014 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:59.014 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:59.014 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 
00:01:59.014 Compiler for C supports arguments -mpclmul: YES 00:01:59.014 Compiler for C supports arguments -maes: YES 00:01:59.014 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:59.014 Compiler for C supports arguments -mavx512bw: YES 00:01:59.014 Compiler for C supports arguments -mavx512dq: YES 00:01:59.014 Compiler for C supports arguments -mavx512vl: YES 00:01:59.014 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:59.014 Compiler for C supports arguments -mavx2: YES 00:01:59.014 Compiler for C supports arguments -mavx: YES 00:01:59.014 Message: lib/net: Defining dependency "net" 00:01:59.014 Message: lib/meter: Defining dependency "meter" 00:01:59.014 Message: lib/ethdev: Defining dependency "ethdev" 00:01:59.014 Message: lib/pci: Defining dependency "pci" 00:01:59.014 Message: lib/cmdline: Defining dependency "cmdline" 00:01:59.014 Message: lib/hash: Defining dependency "hash" 00:01:59.014 Message: lib/timer: Defining dependency "timer" 00:01:59.014 Message: lib/compressdev: Defining dependency "compressdev" 00:01:59.014 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:59.014 Message: lib/dmadev: Defining dependency "dmadev" 00:01:59.014 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:59.014 Message: lib/power: Defining dependency "power" 00:01:59.014 Message: lib/reorder: Defining dependency "reorder" 00:01:59.014 Message: lib/security: Defining dependency "security" 00:01:59.014 Has header "linux/userfaultfd.h" : YES 00:01:59.014 Has header "linux/vduse.h" : YES 00:01:59.014 Message: lib/vhost: Defining dependency "vhost" 00:01:59.014 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:59.014 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:59.014 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:59.014 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:59.014 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 
00:01:59.014 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:59.014 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:59.014 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:59.014 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:59.014 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:59.014 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:59.014 Configuring doxy-api-html.conf using configuration 00:01:59.014 Configuring doxy-api-man.conf using configuration 00:01:59.014 Program mandb found: YES (/usr/bin/mandb) 00:01:59.014 Program sphinx-build found: NO 00:01:59.014 Configuring rte_build_config.h using configuration 00:01:59.014 Message: 00:01:59.014 ================= 00:01:59.014 Applications Enabled 00:01:59.014 ================= 00:01:59.014 00:01:59.014 apps: 00:01:59.014 00:01:59.014 00:01:59.014 Message: 00:01:59.014 ================= 00:01:59.014 Libraries Enabled 00:01:59.014 ================= 00:01:59.014 00:01:59.014 libs: 00:01:59.014 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:59.014 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:59.014 cryptodev, dmadev, power, reorder, security, vhost, 00:01:59.014 00:01:59.014 Message: 00:01:59.014 =============== 00:01:59.014 Drivers Enabled 00:01:59.014 =============== 00:01:59.014 00:01:59.014 common: 00:01:59.014 00:01:59.014 bus: 00:01:59.014 pci, vdev, 00:01:59.014 mempool: 00:01:59.014 ring, 00:01:59.014 dma: 00:01:59.014 00:01:59.014 net: 00:01:59.014 00:01:59.014 crypto: 00:01:59.014 00:01:59.014 compress: 00:01:59.014 00:01:59.014 vdpa: 00:01:59.014 00:01:59.014 00:01:59.014 Message: 00:01:59.014 ================= 00:01:59.014 Content Skipped 00:01:59.014 ================= 00:01:59.014 00:01:59.014 apps: 00:01:59.014 dumpcap: explicitly disabled via build config 00:01:59.014 graph: explicitly disabled via build 
config 00:01:59.014 pdump: explicitly disabled via build config 00:01:59.014 proc-info: explicitly disabled via build config 00:01:59.014 test-acl: explicitly disabled via build config 00:01:59.014 test-bbdev: explicitly disabled via build config 00:01:59.014 test-cmdline: explicitly disabled via build config 00:01:59.014 test-compress-perf: explicitly disabled via build config 00:01:59.014 test-crypto-perf: explicitly disabled via build config 00:01:59.014 test-dma-perf: explicitly disabled via build config 00:01:59.014 test-eventdev: explicitly disabled via build config 00:01:59.014 test-fib: explicitly disabled via build config 00:01:59.014 test-flow-perf: explicitly disabled via build config 00:01:59.014 test-gpudev: explicitly disabled via build config 00:01:59.014 test-mldev: explicitly disabled via build config 00:01:59.014 test-pipeline: explicitly disabled via build config 00:01:59.014 test-pmd: explicitly disabled via build config 00:01:59.014 test-regex: explicitly disabled via build config 00:01:59.014 test-sad: explicitly disabled via build config 00:01:59.014 test-security-perf: explicitly disabled via build config 00:01:59.014 00:01:59.014 libs: 00:01:59.014 argparse: explicitly disabled via build config 00:01:59.014 metrics: explicitly disabled via build config 00:01:59.014 acl: explicitly disabled via build config 00:01:59.014 bbdev: explicitly disabled via build config 00:01:59.014 bitratestats: explicitly disabled via build config 00:01:59.014 bpf: explicitly disabled via build config 00:01:59.014 cfgfile: explicitly disabled via build config 00:01:59.014 distributor: explicitly disabled via build config 00:01:59.014 efd: explicitly disabled via build config 00:01:59.014 eventdev: explicitly disabled via build config 00:01:59.014 dispatcher: explicitly disabled via build config 00:01:59.014 gpudev: explicitly disabled via build config 00:01:59.014 gro: explicitly disabled via build config 00:01:59.014 gso: explicitly disabled via build config 
00:01:59.014 ip_frag: explicitly disabled via build config 00:01:59.014 jobstats: explicitly disabled via build config 00:01:59.014 latencystats: explicitly disabled via build config 00:01:59.014 lpm: explicitly disabled via build config 00:01:59.014 member: explicitly disabled via build config 00:01:59.014 pcapng: explicitly disabled via build config 00:01:59.014 rawdev: explicitly disabled via build config 00:01:59.014 regexdev: explicitly disabled via build config 00:01:59.014 mldev: explicitly disabled via build config 00:01:59.014 rib: explicitly disabled via build config 00:01:59.014 sched: explicitly disabled via build config 00:01:59.014 stack: explicitly disabled via build config 00:01:59.014 ipsec: explicitly disabled via build config 00:01:59.015 pdcp: explicitly disabled via build config 00:01:59.015 fib: explicitly disabled via build config 00:01:59.015 port: explicitly disabled via build config 00:01:59.015 pdump: explicitly disabled via build config 00:01:59.015 table: explicitly disabled via build config 00:01:59.015 pipeline: explicitly disabled via build config 00:01:59.015 graph: explicitly disabled via build config 00:01:59.015 node: explicitly disabled via build config 00:01:59.015 00:01:59.015 drivers: 00:01:59.015 common/cpt: not in enabled drivers build config 00:01:59.015 common/dpaax: not in enabled drivers build config 00:01:59.015 common/iavf: not in enabled drivers build config 00:01:59.015 common/idpf: not in enabled drivers build config 00:01:59.015 common/ionic: not in enabled drivers build config 00:01:59.015 common/mvep: not in enabled drivers build config 00:01:59.015 common/octeontx: not in enabled drivers build config 00:01:59.015 bus/auxiliary: not in enabled drivers build config 00:01:59.015 bus/cdx: not in enabled drivers build config 00:01:59.015 bus/dpaa: not in enabled drivers build config 00:01:59.015 bus/fslmc: not in enabled drivers build config 00:01:59.015 bus/ifpga: not in enabled drivers build config 00:01:59.015 
bus/platform: not in enabled drivers build config 00:01:59.015 bus/uacce: not in enabled drivers build config 00:01:59.015 bus/vmbus: not in enabled drivers build config 00:01:59.015 common/cnxk: not in enabled drivers build config 00:01:59.015 common/mlx5: not in enabled drivers build config 00:01:59.015 common/nfp: not in enabled drivers build config 00:01:59.015 common/nitrox: not in enabled drivers build config 00:01:59.015 common/qat: not in enabled drivers build config 00:01:59.015 common/sfc_efx: not in enabled drivers build config 00:01:59.015 mempool/bucket: not in enabled drivers build config 00:01:59.015 mempool/cnxk: not in enabled drivers build config 00:01:59.015 mempool/dpaa: not in enabled drivers build config 00:01:59.015 mempool/dpaa2: not in enabled drivers build config 00:01:59.015 mempool/octeontx: not in enabled drivers build config 00:01:59.015 mempool/stack: not in enabled drivers build config 00:01:59.015 dma/cnxk: not in enabled drivers build config 00:01:59.015 dma/dpaa: not in enabled drivers build config 00:01:59.015 dma/dpaa2: not in enabled drivers build config 00:01:59.015 dma/hisilicon: not in enabled drivers build config 00:01:59.015 dma/idxd: not in enabled drivers build config 00:01:59.015 dma/ioat: not in enabled drivers build config 00:01:59.015 dma/skeleton: not in enabled drivers build config 00:01:59.015 net/af_packet: not in enabled drivers build config 00:01:59.015 net/af_xdp: not in enabled drivers build config 00:01:59.015 net/ark: not in enabled drivers build config 00:01:59.015 net/atlantic: not in enabled drivers build config 00:01:59.015 net/avp: not in enabled drivers build config 00:01:59.015 net/axgbe: not in enabled drivers build config 00:01:59.015 net/bnx2x: not in enabled drivers build config 00:01:59.015 net/bnxt: not in enabled drivers build config 00:01:59.015 net/bonding: not in enabled drivers build config 00:01:59.015 net/cnxk: not in enabled drivers build config 00:01:59.015 net/cpfl: not in enabled 
drivers build config 00:01:59.015 net/cxgbe: not in enabled drivers build config 00:01:59.015 net/dpaa: not in enabled drivers build config 00:01:59.015 net/dpaa2: not in enabled drivers build config 00:01:59.015 net/e1000: not in enabled drivers build config 00:01:59.015 net/ena: not in enabled drivers build config 00:01:59.015 net/enetc: not in enabled drivers build config 00:01:59.015 net/enetfec: not in enabled drivers build config 00:01:59.015 net/enic: not in enabled drivers build config 00:01:59.015 net/failsafe: not in enabled drivers build config 00:01:59.015 net/fm10k: not in enabled drivers build config 00:01:59.015 net/gve: not in enabled drivers build config 00:01:59.015 net/hinic: not in enabled drivers build config 00:01:59.015 net/hns3: not in enabled drivers build config 00:01:59.015 net/i40e: not in enabled drivers build config 00:01:59.015 net/iavf: not in enabled drivers build config 00:01:59.015 net/ice: not in enabled drivers build config 00:01:59.015 net/idpf: not in enabled drivers build config 00:01:59.015 net/igc: not in enabled drivers build config 00:01:59.015 net/ionic: not in enabled drivers build config 00:01:59.015 net/ipn3ke: not in enabled drivers build config 00:01:59.015 net/ixgbe: not in enabled drivers build config 00:01:59.015 net/mana: not in enabled drivers build config 00:01:59.015 net/memif: not in enabled drivers build config 00:01:59.015 net/mlx4: not in enabled drivers build config 00:01:59.015 net/mlx5: not in enabled drivers build config 00:01:59.015 net/mvneta: not in enabled drivers build config 00:01:59.015 net/mvpp2: not in enabled drivers build config 00:01:59.015 net/netvsc: not in enabled drivers build config 00:01:59.015 net/nfb: not in enabled drivers build config 00:01:59.015 net/nfp: not in enabled drivers build config 00:01:59.015 net/ngbe: not in enabled drivers build config 00:01:59.015 net/null: not in enabled drivers build config 00:01:59.015 net/octeontx: not in enabled drivers build config 
00:01:59.015 net/octeon_ep: not in enabled drivers build config 00:01:59.015 net/pcap: not in enabled drivers build config 00:01:59.015 net/pfe: not in enabled drivers build config 00:01:59.015 net/qede: not in enabled drivers build config 00:01:59.015 net/ring: not in enabled drivers build config 00:01:59.015 net/sfc: not in enabled drivers build config 00:01:59.015 net/softnic: not in enabled drivers build config 00:01:59.015 net/tap: not in enabled drivers build config 00:01:59.015 net/thunderx: not in enabled drivers build config 00:01:59.015 net/txgbe: not in enabled drivers build config 00:01:59.015 net/vdev_netvsc: not in enabled drivers build config 00:01:59.015 net/vhost: not in enabled drivers build config 00:01:59.015 net/virtio: not in enabled drivers build config 00:01:59.015 net/vmxnet3: not in enabled drivers build config 00:01:59.015 raw/*: missing internal dependency, "rawdev" 00:01:59.015 crypto/armv8: not in enabled drivers build config 00:01:59.015 crypto/bcmfs: not in enabled drivers build config 00:01:59.015 crypto/caam_jr: not in enabled drivers build config 00:01:59.015 crypto/ccp: not in enabled drivers build config 00:01:59.015 crypto/cnxk: not in enabled drivers build config 00:01:59.015 crypto/dpaa_sec: not in enabled drivers build config 00:01:59.015 crypto/dpaa2_sec: not in enabled drivers build config 00:01:59.015 crypto/ipsec_mb: not in enabled drivers build config 00:01:59.015 crypto/mlx5: not in enabled drivers build config 00:01:59.015 crypto/mvsam: not in enabled drivers build config 00:01:59.015 crypto/nitrox: not in enabled drivers build config 00:01:59.015 crypto/null: not in enabled drivers build config 00:01:59.015 crypto/octeontx: not in enabled drivers build config 00:01:59.015 crypto/openssl: not in enabled drivers build config 00:01:59.015 crypto/scheduler: not in enabled drivers build config 00:01:59.015 crypto/uadk: not in enabled drivers build config 00:01:59.015 crypto/virtio: not in enabled drivers build config 
00:01:59.015 compress/isal: not in enabled drivers build config 00:01:59.015 compress/mlx5: not in enabled drivers build config 00:01:59.015 compress/nitrox: not in enabled drivers build config 00:01:59.015 compress/octeontx: not in enabled drivers build config 00:01:59.015 compress/zlib: not in enabled drivers build config 00:01:59.015 regex/*: missing internal dependency, "regexdev" 00:01:59.015 ml/*: missing internal dependency, "mldev" 00:01:59.015 vdpa/ifc: not in enabled drivers build config 00:01:59.015 vdpa/mlx5: not in enabled drivers build config 00:01:59.015 vdpa/nfp: not in enabled drivers build config 00:01:59.015 vdpa/sfc: not in enabled drivers build config 00:01:59.015 event/*: missing internal dependency, "eventdev" 00:01:59.015 baseband/*: missing internal dependency, "bbdev" 00:01:59.015 gpu/*: missing internal dependency, "gpudev" 00:01:59.015 00:01:59.015 00:01:59.015 Build targets in project: 85 00:01:59.015 00:01:59.015 DPDK 24.03.0 00:01:59.015 00:01:59.015 User defined options 00:01:59.015 buildtype : debug 00:01:59.015 default_library : shared 00:01:59.015 libdir : lib 00:01:59.015 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:59.015 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:59.015 c_link_args : 00:01:59.015 cpu_instruction_set: native 00:01:59.015 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:59.015 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:59.015 enable_docs : false 00:01:59.015 enable_drivers : 
bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:59.015 enable_kmods : false 00:01:59.015 max_lcores : 128 00:01:59.015 tests : false 00:01:59.015 00:01:59.015 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:59.280 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:59.280 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:59.280 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:59.280 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:59.281 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:59.281 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:59.281 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:59.281 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:59.281 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:59.281 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:59.281 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:59.540 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:59.540 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:59.540 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:59.540 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:59.540 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:59.540 [16/268] Linking static target lib/librte_kvargs.a 00:01:59.540 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:59.540 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:59.540 [19/268] Linking 
static target lib/librte_log.a 00:01:59.540 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:59.540 [21/268] Linking static target lib/librte_pci.a 00:01:59.540 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:59.809 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:59.809 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:59.809 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:59.809 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:59.809 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:59.809 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:59.809 [29/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:59.809 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:59.809 [31/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:59.809 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:59.809 [33/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:59.809 [34/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:59.809 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:59.809 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:59.809 [37/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:59.809 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:59.809 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:59.809 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:59.809 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:59.809 [42/268] 
Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:59.809 [43/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:59.809 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:59.809 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:59.809 [46/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:00.068 [47/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:00.068 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:00.068 [49/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:00.068 [50/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:00.068 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:00.068 [52/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:00.068 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.068 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:00.068 [55/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:00.068 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:00.068 [57/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:00.068 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.068 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:00.068 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:00.068 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:00.068 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:00.068 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:00.068 [64/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:00.068 [65/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:00.069 [66/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:00.069 [67/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.069 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:00.069 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:00.069 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.069 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:00.069 [72/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:00.069 [73/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:00.069 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:00.069 [75/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:00.069 [76/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:00.069 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:00.069 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:00.069 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:00.069 [80/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:00.069 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:00.069 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:00.069 [83/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:00.069 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.069 [85/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:00.069 [86/268] Generating lib/kvargs.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:00.069 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:00.069 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:00.069 [89/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:00.069 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:00.069 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:00.069 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:00.069 [93/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:00.069 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:00.069 [95/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:00.069 [96/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:00.069 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:00.069 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:00.069 [99/268] Linking static target lib/librte_meter.a 00:02:00.069 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:00.069 [101/268] Linking static target lib/librte_net.a 00:02:00.069 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:00.069 [103/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:00.069 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:00.069 [105/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:00.069 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:00.069 [107/268] Linking static target lib/librte_telemetry.a 00:02:00.069 [108/268] Linking static target lib/librte_ring.a 00:02:00.069 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:00.069 [110/268] 
Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:00.069 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:00.069 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:00.069 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:00.069 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:00.069 [115/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:00.069 [116/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:00.327 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:00.327 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:00.327 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:00.328 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:00.328 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:00.328 [122/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:00.328 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:00.328 [124/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:00.328 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:00.328 [126/268] Linking static target lib/librte_eal.a 00:02:00.328 [127/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:00.328 [128/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:00.328 [129/268] Linking static target lib/librte_mempool.a 00:02:00.328 [130/268] Linking static target lib/librte_rcu.a 00:02:00.328 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:00.328 [132/268] Linking static target lib/librte_cmdline.a 00:02:00.328 [133/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:00.328 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:00.328 [135/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:00.328 [136/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:00.328 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:00.328 [138/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.328 [139/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.328 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:00.328 [141/268] Linking target lib/librte_log.so.24.1 00:02:00.328 [142/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.328 [143/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.328 [144/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:00.328 [145/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:00.328 [146/268] Linking static target lib/librte_mbuf.a 00:02:00.328 [147/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:00.328 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:00.328 [149/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:00.328 [150/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:00.328 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:00.328 [152/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:00.328 [153/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:00.328 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:00.328 
[155/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:00.328 [156/268] Linking static target lib/librte_dmadev.a 00:02:00.586 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:00.586 [158/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:00.586 [159/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:00.586 [160/268] Linking static target lib/librte_timer.a 00:02:00.586 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:00.586 [162/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:00.586 [163/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:00.586 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:00.586 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:00.586 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:00.586 [167/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:00.586 [168/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:00.586 [169/268] Linking static target lib/librte_reorder.a 00:02:00.586 [170/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:00.586 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:00.586 [172/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:00.586 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:00.586 [174/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:00.586 [175/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.586 [176/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:00.586 [177/268] Linking static target lib/librte_compressdev.a 00:02:00.586 
[178/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:00.586 [179/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:00.586 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:00.586 [181/268] Linking target lib/librte_telemetry.so.24.1 00:02:00.586 [182/268] Linking target lib/librte_kvargs.so.24.1 00:02:00.586 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:00.586 [184/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.586 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:00.586 [186/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:00.586 [187/268] Linking static target lib/librte_security.a 00:02:00.586 [188/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:00.586 [189/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:00.586 [190/268] Linking static target lib/librte_power.a 00:02:00.586 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:00.586 [192/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:00.845 [193/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:00.845 [194/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.845 [195/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.845 [196/268] Linking static target drivers/librte_bus_vdev.a 00:02:00.845 [197/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:00.845 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:00.845 [199/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:00.845 [200/268] Generating drivers/rte_bus_pci.pmd.c with a custom 
command 00:02:00.845 [201/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:00.845 [202/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:00.845 [203/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:00.845 [204/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:00.845 [205/268] Linking static target lib/librte_hash.a 00:02:00.845 [206/268] Linking static target drivers/librte_bus_pci.a 00:02:00.845 [207/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.845 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:01.103 [209/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.103 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.103 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.103 [212/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:01.103 [213/268] Linking static target drivers/librte_mempool_ring.a 00:02:01.103 [214/268] Linking static target lib/librte_cryptodev.a 00:02:01.103 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.103 [216/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.103 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.103 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.103 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:01.103 [220/268] Linking static target lib/librte_ethdev.a 00:02:01.360 [221/268] Generating lib/compressdev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:01.360 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.360 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.619 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.619 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:01.619 [226/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.619 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.553 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:02.553 [229/268] Linking static target lib/librte_vhost.a 00:02:02.810 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.780 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.057 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.316 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.316 [234/268] Linking target lib/librte_eal.so.24.1 00:02:10.575 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:10.575 [236/268] Linking target lib/librte_meter.so.24.1 00:02:10.575 [237/268] Linking target lib/librte_pci.so.24.1 00:02:10.575 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:10.575 [239/268] Linking target lib/librte_ring.so.24.1 00:02:10.575 [240/268] Linking target lib/librte_timer.so.24.1 00:02:10.575 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:10.834 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:10.834 [243/268] Generating symbol file 
lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:10.834 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:10.834 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:10.834 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:10.834 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:10.834 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:10.834 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:10.834 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:10.834 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:10.834 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:10.834 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:11.093 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:11.093 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:11.094 [256/268] Linking target lib/librte_net.so.24.1 00:02:11.094 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:11.094 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:11.353 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:11.353 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:11.353 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:11.353 [262/268] Linking target lib/librte_hash.so.24.1 00:02:11.353 [263/268] Linking target lib/librte_security.so.24.1 00:02:11.353 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:11.353 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:11.353 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:11.612 [267/268] Linking target lib/librte_power.so.24.1 00:02:11.612 
[268/268] Linking target lib/librte_vhost.so.24.1 00:02:11.612 INFO: autodetecting backend as ninja 00:02:11.612 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:21.667 CC lib/ut/ut.o 00:02:21.667 CC lib/log/log.o 00:02:21.667 CC lib/ut_mock/mock.o 00:02:21.667 CC lib/log/log_flags.o 00:02:21.667 CC lib/log/log_deprecated.o 00:02:21.926 LIB libspdk_log.a 00:02:21.926 LIB libspdk_ut_mock.a 00:02:21.926 LIB libspdk_ut.a 00:02:21.926 SO libspdk_log.so.7.1 00:02:21.926 SO libspdk_ut.so.2.0 00:02:21.926 SO libspdk_ut_mock.so.6.0 00:02:21.926 SYMLINK libspdk_ut.so 00:02:21.926 SYMLINK libspdk_log.so 00:02:21.926 SYMLINK libspdk_ut_mock.so 00:02:22.493 CC lib/dma/dma.o 00:02:22.493 CC lib/ioat/ioat.o 00:02:22.493 CXX lib/trace_parser/trace.o 00:02:22.493 CC lib/util/base64.o 00:02:22.493 CC lib/util/bit_array.o 00:02:22.493 CC lib/util/cpuset.o 00:02:22.493 CC lib/util/crc16.o 00:02:22.493 CC lib/util/crc32.o 00:02:22.493 CC lib/util/crc32c.o 00:02:22.493 CC lib/util/crc32_ieee.o 00:02:22.493 CC lib/util/crc64.o 00:02:22.493 CC lib/util/dif.o 00:02:22.493 CC lib/util/fd.o 00:02:22.493 CC lib/util/fd_group.o 00:02:22.493 CC lib/util/file.o 00:02:22.493 CC lib/util/hexlify.o 00:02:22.493 CC lib/util/iov.o 00:02:22.493 CC lib/util/math.o 00:02:22.493 CC lib/util/net.o 00:02:22.493 CC lib/util/pipe.o 00:02:22.493 CC lib/util/strerror_tls.o 00:02:22.493 CC lib/util/string.o 00:02:22.493 CC lib/util/uuid.o 00:02:22.493 CC lib/util/xor.o 00:02:22.493 CC lib/util/zipf.o 00:02:22.493 CC lib/util/md5.o 00:02:22.493 CC lib/vfio_user/host/vfio_user.o 00:02:22.493 CC lib/vfio_user/host/vfio_user_pci.o 00:02:22.493 LIB libspdk_dma.a 00:02:22.493 SO libspdk_dma.so.5.0 00:02:22.752 SYMLINK libspdk_dma.so 00:02:22.752 LIB libspdk_ioat.a 00:02:22.752 SO libspdk_ioat.so.7.0 00:02:22.752 SYMLINK libspdk_ioat.so 00:02:22.752 LIB libspdk_vfio_user.a 00:02:22.752 SO libspdk_vfio_user.so.5.0 
00:02:22.752 SYMLINK libspdk_vfio_user.so 00:02:22.752 LIB libspdk_util.a 00:02:23.011 SO libspdk_util.so.10.1 00:02:23.011 SYMLINK libspdk_util.so 00:02:23.011 LIB libspdk_trace_parser.a 00:02:23.011 SO libspdk_trace_parser.so.6.0 00:02:23.270 SYMLINK libspdk_trace_parser.so 00:02:23.270 CC lib/json/json_parse.o 00:02:23.270 CC lib/json/json_util.o 00:02:23.270 CC lib/json/json_write.o 00:02:23.270 CC lib/conf/conf.o 00:02:23.270 CC lib/rdma_utils/rdma_utils.o 00:02:23.270 CC lib/vmd/vmd.o 00:02:23.270 CC lib/vmd/led.o 00:02:23.270 CC lib/env_dpdk/env.o 00:02:23.270 CC lib/idxd/idxd.o 00:02:23.270 CC lib/env_dpdk/memory.o 00:02:23.270 CC lib/idxd/idxd_user.o 00:02:23.270 CC lib/env_dpdk/pci.o 00:02:23.270 CC lib/idxd/idxd_kernel.o 00:02:23.270 CC lib/env_dpdk/init.o 00:02:23.270 CC lib/env_dpdk/threads.o 00:02:23.270 CC lib/env_dpdk/pci_ioat.o 00:02:23.270 CC lib/env_dpdk/pci_virtio.o 00:02:23.270 CC lib/env_dpdk/pci_vmd.o 00:02:23.270 CC lib/env_dpdk/pci_idxd.o 00:02:23.270 CC lib/env_dpdk/pci_event.o 00:02:23.270 CC lib/env_dpdk/sigbus_handler.o 00:02:23.270 CC lib/env_dpdk/pci_dpdk.o 00:02:23.270 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:23.270 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:23.529 LIB libspdk_conf.a 00:02:23.529 SO libspdk_conf.so.6.0 00:02:23.529 LIB libspdk_rdma_utils.a 00:02:23.788 LIB libspdk_json.a 00:02:23.788 SO libspdk_rdma_utils.so.1.0 00:02:23.788 SYMLINK libspdk_conf.so 00:02:23.788 SO libspdk_json.so.6.0 00:02:23.788 SYMLINK libspdk_rdma_utils.so 00:02:23.788 SYMLINK libspdk_json.so 00:02:23.788 LIB libspdk_idxd.a 00:02:23.788 SO libspdk_idxd.so.12.1 00:02:23.788 LIB libspdk_vmd.a 00:02:24.050 SO libspdk_vmd.so.6.0 00:02:24.050 SYMLINK libspdk_idxd.so 00:02:24.050 CC lib/rdma_provider/common.o 00:02:24.050 SYMLINK libspdk_vmd.so 00:02:24.050 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:24.050 CC lib/jsonrpc/jsonrpc_server.o 00:02:24.050 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:24.050 CC lib/jsonrpc/jsonrpc_client.o 00:02:24.050 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:02:24.308 LIB libspdk_rdma_provider.a 00:02:24.308 SO libspdk_rdma_provider.so.7.0 00:02:24.308 LIB libspdk_jsonrpc.a 00:02:24.308 SYMLINK libspdk_rdma_provider.so 00:02:24.308 SO libspdk_jsonrpc.so.6.0 00:02:24.308 SYMLINK libspdk_jsonrpc.so 00:02:24.308 LIB libspdk_env_dpdk.a 00:02:24.566 SO libspdk_env_dpdk.so.15.1 00:02:24.566 SYMLINK libspdk_env_dpdk.so 00:02:24.566 CC lib/rpc/rpc.o 00:02:24.825 LIB libspdk_rpc.a 00:02:24.825 SO libspdk_rpc.so.6.0 00:02:25.084 SYMLINK libspdk_rpc.so 00:02:25.344 CC lib/notify/notify.o 00:02:25.344 CC lib/notify/notify_rpc.o 00:02:25.344 CC lib/trace/trace.o 00:02:25.344 CC lib/trace/trace_flags.o 00:02:25.344 CC lib/trace/trace_rpc.o 00:02:25.344 CC lib/keyring/keyring.o 00:02:25.344 CC lib/keyring/keyring_rpc.o 00:02:25.344 LIB libspdk_notify.a 00:02:25.344 SO libspdk_notify.so.6.0 00:02:25.603 LIB libspdk_trace.a 00:02:25.603 LIB libspdk_keyring.a 00:02:25.603 SO libspdk_trace.so.11.0 00:02:25.603 SYMLINK libspdk_notify.so 00:02:25.603 SO libspdk_keyring.so.2.0 00:02:25.603 SYMLINK libspdk_trace.so 00:02:25.603 SYMLINK libspdk_keyring.so 00:02:25.862 CC lib/sock/sock.o 00:02:25.862 CC lib/sock/sock_rpc.o 00:02:25.862 CC lib/thread/thread.o 00:02:25.862 CC lib/thread/iobuf.o 00:02:26.121 LIB libspdk_sock.a 00:02:26.121 SO libspdk_sock.so.10.0 00:02:26.380 SYMLINK libspdk_sock.so 00:02:26.639 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:26.639 CC lib/nvme/nvme_ctrlr.o 00:02:26.639 CC lib/nvme/nvme_fabric.o 00:02:26.639 CC lib/nvme/nvme_ns_cmd.o 00:02:26.639 CC lib/nvme/nvme_ns.o 00:02:26.639 CC lib/nvme/nvme_pcie_common.o 00:02:26.639 CC lib/nvme/nvme_pcie.o 00:02:26.639 CC lib/nvme/nvme_qpair.o 00:02:26.639 CC lib/nvme/nvme.o 00:02:26.639 CC lib/nvme/nvme_quirks.o 00:02:26.639 CC lib/nvme/nvme_transport.o 00:02:26.639 CC lib/nvme/nvme_discovery.o 00:02:26.639 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:26.639 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:26.639 CC lib/nvme/nvme_tcp.o 00:02:26.639 CC 
lib/nvme/nvme_opal.o 00:02:26.639 CC lib/nvme/nvme_io_msg.o 00:02:26.639 CC lib/nvme/nvme_poll_group.o 00:02:26.639 CC lib/nvme/nvme_zns.o 00:02:26.639 CC lib/nvme/nvme_stubs.o 00:02:26.639 CC lib/nvme/nvme_auth.o 00:02:26.639 CC lib/nvme/nvme_cuse.o 00:02:26.639 CC lib/nvme/nvme_vfio_user.o 00:02:26.639 CC lib/nvme/nvme_rdma.o 00:02:26.897 LIB libspdk_thread.a 00:02:26.897 SO libspdk_thread.so.11.0 00:02:27.156 SYMLINK libspdk_thread.so 00:02:27.414 CC lib/init/json_config.o 00:02:27.415 CC lib/init/subsystem.o 00:02:27.415 CC lib/init/rpc.o 00:02:27.415 CC lib/init/subsystem_rpc.o 00:02:27.415 CC lib/fsdev/fsdev.o 00:02:27.415 CC lib/fsdev/fsdev_io.o 00:02:27.415 CC lib/fsdev/fsdev_rpc.o 00:02:27.415 CC lib/blob/request.o 00:02:27.415 CC lib/blob/blobstore.o 00:02:27.415 CC lib/vfu_tgt/tgt_endpoint.o 00:02:27.415 CC lib/blob/zeroes.o 00:02:27.415 CC lib/vfu_tgt/tgt_rpc.o 00:02:27.415 CC lib/blob/blob_bs_dev.o 00:02:27.415 CC lib/virtio/virtio.o 00:02:27.415 CC lib/virtio/virtio_pci.o 00:02:27.415 CC lib/virtio/virtio_vhost_user.o 00:02:27.415 CC lib/virtio/virtio_vfio_user.o 00:02:27.415 CC lib/accel/accel_rpc.o 00:02:27.415 CC lib/accel/accel.o 00:02:27.415 CC lib/accel/accel_sw.o 00:02:27.673 LIB libspdk_init.a 00:02:27.673 SO libspdk_init.so.6.0 00:02:27.673 LIB libspdk_virtio.a 00:02:27.673 LIB libspdk_vfu_tgt.a 00:02:27.673 SYMLINK libspdk_init.so 00:02:27.673 SO libspdk_virtio.so.7.0 00:02:27.673 SO libspdk_vfu_tgt.so.3.0 00:02:27.673 SYMLINK libspdk_virtio.so 00:02:27.673 SYMLINK libspdk_vfu_tgt.so 00:02:27.931 LIB libspdk_fsdev.a 00:02:27.931 SO libspdk_fsdev.so.2.0 00:02:27.931 CC lib/event/app.o 00:02:27.931 SYMLINK libspdk_fsdev.so 00:02:27.931 CC lib/event/reactor.o 00:02:27.931 CC lib/event/log_rpc.o 00:02:27.931 CC lib/event/app_rpc.o 00:02:27.931 CC lib/event/scheduler_static.o 00:02:28.190 LIB libspdk_accel.a 00:02:28.190 SO libspdk_accel.so.16.0 00:02:28.190 SYMLINK libspdk_accel.so 00:02:28.190 CC lib/fuse_dispatcher/fuse_dispatcher.o 
00:02:28.190 LIB libspdk_nvme.a 00:02:28.449 LIB libspdk_event.a 00:02:28.449 SO libspdk_event.so.14.0 00:02:28.449 SO libspdk_nvme.so.15.0 00:02:28.449 SYMLINK libspdk_event.so 00:02:28.707 CC lib/bdev/bdev.o 00:02:28.707 CC lib/bdev/bdev_rpc.o 00:02:28.707 CC lib/bdev/part.o 00:02:28.707 CC lib/bdev/bdev_zone.o 00:02:28.707 CC lib/bdev/scsi_nvme.o 00:02:28.707 SYMLINK libspdk_nvme.so 00:02:28.707 LIB libspdk_fuse_dispatcher.a 00:02:28.707 SO libspdk_fuse_dispatcher.so.1.0 00:02:28.966 SYMLINK libspdk_fuse_dispatcher.so 00:02:29.533 LIB libspdk_blob.a 00:02:29.533 SO libspdk_blob.so.12.0 00:02:29.533 SYMLINK libspdk_blob.so 00:02:30.100 CC lib/lvol/lvol.o 00:02:30.100 CC lib/blobfs/blobfs.o 00:02:30.100 CC lib/blobfs/tree.o 00:02:30.359 LIB libspdk_bdev.a 00:02:30.618 SO libspdk_bdev.so.17.0 00:02:30.618 LIB libspdk_blobfs.a 00:02:30.618 SO libspdk_blobfs.so.11.0 00:02:30.618 LIB libspdk_lvol.a 00:02:30.618 SYMLINK libspdk_bdev.so 00:02:30.618 SO libspdk_lvol.so.11.0 00:02:30.618 SYMLINK libspdk_blobfs.so 00:02:30.618 SYMLINK libspdk_lvol.so 00:02:30.877 CC lib/nbd/nbd.o 00:02:30.877 CC lib/nbd/nbd_rpc.o 00:02:30.877 CC lib/scsi/dev.o 00:02:30.877 CC lib/ftl/ftl_core.o 00:02:30.877 CC lib/ublk/ublk.o 00:02:30.877 CC lib/scsi/lun.o 00:02:30.877 CC lib/ftl/ftl_init.o 00:02:30.877 CC lib/ublk/ublk_rpc.o 00:02:30.877 CC lib/scsi/port.o 00:02:30.877 CC lib/ftl/ftl_layout.o 00:02:30.877 CC lib/scsi/scsi.o 00:02:30.877 CC lib/ftl/ftl_debug.o 00:02:30.877 CC lib/scsi/scsi_bdev.o 00:02:30.877 CC lib/ftl/ftl_io.o 00:02:30.877 CC lib/ftl/ftl_sb.o 00:02:30.877 CC lib/scsi/scsi_pr.o 00:02:30.877 CC lib/nvmf/ctrlr.o 00:02:30.877 CC lib/scsi/scsi_rpc.o 00:02:30.877 CC lib/ftl/ftl_l2p.o 00:02:30.877 CC lib/scsi/task.o 00:02:30.877 CC lib/nvmf/ctrlr_discovery.o 00:02:30.877 CC lib/ftl/ftl_l2p_flat.o 00:02:30.877 CC lib/nvmf/ctrlr_bdev.o 00:02:30.877 CC lib/ftl/ftl_nv_cache.o 00:02:30.877 CC lib/nvmf/subsystem.o 00:02:30.877 CC lib/nvmf/nvmf_rpc.o 00:02:30.877 CC lib/ftl/ftl_band.o 
00:02:30.877 CC lib/nvmf/nvmf.o 00:02:30.877 CC lib/ftl/ftl_band_ops.o 00:02:30.877 CC lib/ftl/ftl_writer.o 00:02:30.877 CC lib/nvmf/tcp.o 00:02:30.877 CC lib/nvmf/transport.o 00:02:30.877 CC lib/ftl/ftl_rq.o 00:02:30.877 CC lib/nvmf/vfio_user.o 00:02:30.877 CC lib/nvmf/stubs.o 00:02:30.877 CC lib/nvmf/mdns_server.o 00:02:30.877 CC lib/ftl/ftl_reloc.o 00:02:30.877 CC lib/ftl/ftl_l2p_cache.o 00:02:30.877 CC lib/ftl/ftl_p2l_log.o 00:02:30.877 CC lib/nvmf/auth.o 00:02:30.877 CC lib/nvmf/rdma.o 00:02:30.877 CC lib/ftl/ftl_p2l.o 00:02:30.877 CC lib/ftl/mngt/ftl_mngt.o 00:02:30.877 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:30.877 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:30.877 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:30.877 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:30.877 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:30.877 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:30.877 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:30.877 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:30.877 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:30.877 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:30.877 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:30.877 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:30.877 CC lib/ftl/utils/ftl_conf.o 00:02:30.877 CC lib/ftl/utils/ftl_md.o 00:02:30.877 CC lib/ftl/utils/ftl_bitmap.o 00:02:30.877 CC lib/ftl/utils/ftl_mempool.o 00:02:30.877 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:30.877 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:30.877 CC lib/ftl/utils/ftl_property.o 00:02:30.877 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:30.877 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:30.877 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:30.877 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:30.877 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:30.877 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:30.877 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:30.877 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:30.877 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:30.877 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:30.877 CC lib/ftl/base/ftl_base_dev.o 00:02:30.877 CC 
lib/ftl/ftl_trace.o 00:02:30.877 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:30.877 CC lib/ftl/base/ftl_base_bdev.o 00:02:31.443 LIB libspdk_nbd.a 00:02:31.443 LIB libspdk_scsi.a 00:02:31.443 SO libspdk_nbd.so.7.0 00:02:31.702 SO libspdk_scsi.so.9.0 00:02:31.702 SYMLINK libspdk_nbd.so 00:02:31.702 SYMLINK libspdk_scsi.so 00:02:31.702 LIB libspdk_ublk.a 00:02:31.702 SO libspdk_ublk.so.3.0 00:02:31.960 LIB libspdk_ftl.a 00:02:31.960 SYMLINK libspdk_ublk.so 00:02:31.960 CC lib/vhost/vhost.o 00:02:31.960 CC lib/vhost/vhost_rpc.o 00:02:31.960 CC lib/vhost/vhost_blk.o 00:02:31.960 CC lib/vhost/vhost_scsi.o 00:02:31.960 CC lib/vhost/rte_vhost_user.o 00:02:31.960 SO libspdk_ftl.so.9.0 00:02:31.960 CC lib/iscsi/conn.o 00:02:31.961 CC lib/iscsi/init_grp.o 00:02:31.961 CC lib/iscsi/iscsi.o 00:02:31.961 CC lib/iscsi/param.o 00:02:31.961 CC lib/iscsi/portal_grp.o 00:02:31.961 CC lib/iscsi/tgt_node.o 00:02:31.961 CC lib/iscsi/iscsi_subsystem.o 00:02:31.961 CC lib/iscsi/iscsi_rpc.o 00:02:31.961 CC lib/iscsi/task.o 00:02:32.219 SYMLINK libspdk_ftl.so 00:02:32.788 LIB libspdk_nvmf.a 00:02:32.788 LIB libspdk_vhost.a 00:02:32.788 SO libspdk_nvmf.so.20.0 00:02:32.788 SO libspdk_vhost.so.8.0 00:02:32.788 SYMLINK libspdk_vhost.so 00:02:33.046 SYMLINK libspdk_nvmf.so 00:02:33.046 LIB libspdk_iscsi.a 00:02:33.046 SO libspdk_iscsi.so.8.0 00:02:33.046 SYMLINK libspdk_iscsi.so 00:02:33.614 CC module/env_dpdk/env_dpdk_rpc.o 00:02:33.614 CC module/vfu_device/vfu_virtio.o 00:02:33.614 CC module/vfu_device/vfu_virtio_scsi.o 00:02:33.614 CC module/vfu_device/vfu_virtio_blk.o 00:02:33.614 CC module/vfu_device/vfu_virtio_rpc.o 00:02:33.614 CC module/vfu_device/vfu_virtio_fs.o 00:02:33.872 CC module/keyring/file/keyring_rpc.o 00:02:33.872 CC module/keyring/file/keyring.o 00:02:33.872 CC module/accel/dsa/accel_dsa_rpc.o 00:02:33.872 CC module/accel/dsa/accel_dsa.o 00:02:33.872 LIB libspdk_env_dpdk_rpc.a 00:02:33.872 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:33.872 CC 
module/sock/posix/posix.o 00:02:33.872 CC module/scheduler/gscheduler/gscheduler.o 00:02:33.872 CC module/keyring/linux/keyring.o 00:02:33.872 CC module/accel/error/accel_error.o 00:02:33.872 CC module/fsdev/aio/fsdev_aio.o 00:02:33.872 CC module/keyring/linux/keyring_rpc.o 00:02:33.872 CC module/accel/error/accel_error_rpc.o 00:02:33.872 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:33.872 CC module/accel/ioat/accel_ioat.o 00:02:33.872 CC module/accel/ioat/accel_ioat_rpc.o 00:02:33.872 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:33.872 CC module/fsdev/aio/linux_aio_mgr.o 00:02:33.872 CC module/blob/bdev/blob_bdev.o 00:02:33.872 CC module/accel/iaa/accel_iaa.o 00:02:33.872 CC module/accel/iaa/accel_iaa_rpc.o 00:02:33.872 SO libspdk_env_dpdk_rpc.so.6.0 00:02:33.872 SYMLINK libspdk_env_dpdk_rpc.so 00:02:33.872 LIB libspdk_keyring_file.a 00:02:33.872 LIB libspdk_scheduler_dpdk_governor.a 00:02:33.872 LIB libspdk_keyring_linux.a 00:02:34.131 LIB libspdk_scheduler_gscheduler.a 00:02:34.131 SO libspdk_keyring_file.so.2.0 00:02:34.131 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:34.131 SO libspdk_keyring_linux.so.1.0 00:02:34.131 LIB libspdk_scheduler_dynamic.a 00:02:34.131 SO libspdk_scheduler_gscheduler.so.4.0 00:02:34.131 LIB libspdk_accel_ioat.a 00:02:34.131 LIB libspdk_accel_iaa.a 00:02:34.131 LIB libspdk_accel_error.a 00:02:34.131 SO libspdk_scheduler_dynamic.so.4.0 00:02:34.131 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:34.131 LIB libspdk_accel_dsa.a 00:02:34.131 SO libspdk_accel_iaa.so.3.0 00:02:34.131 SYMLINK libspdk_keyring_file.so 00:02:34.131 SO libspdk_accel_ioat.so.6.0 00:02:34.131 SO libspdk_accel_error.so.2.0 00:02:34.131 SYMLINK libspdk_keyring_linux.so 00:02:34.131 SYMLINK libspdk_scheduler_gscheduler.so 00:02:34.131 SO libspdk_accel_dsa.so.5.0 00:02:34.131 LIB libspdk_blob_bdev.a 00:02:34.131 SYMLINK libspdk_scheduler_dynamic.so 00:02:34.131 SYMLINK libspdk_accel_iaa.so 00:02:34.131 SYMLINK libspdk_accel_ioat.so 00:02:34.131 SO 
libspdk_blob_bdev.so.12.0 00:02:34.131 SYMLINK libspdk_accel_error.so 00:02:34.131 SYMLINK libspdk_accel_dsa.so 00:02:34.131 LIB libspdk_vfu_device.a 00:02:34.131 SYMLINK libspdk_blob_bdev.so 00:02:34.131 SO libspdk_vfu_device.so.3.0 00:02:34.390 SYMLINK libspdk_vfu_device.so 00:02:34.390 LIB libspdk_fsdev_aio.a 00:02:34.390 SO libspdk_fsdev_aio.so.1.0 00:02:34.390 LIB libspdk_sock_posix.a 00:02:34.390 SO libspdk_sock_posix.so.6.0 00:02:34.390 SYMLINK libspdk_fsdev_aio.so 00:02:34.649 SYMLINK libspdk_sock_posix.so 00:02:34.649 CC module/bdev/gpt/vbdev_gpt.o 00:02:34.649 CC module/bdev/delay/vbdev_delay.o 00:02:34.649 CC module/bdev/gpt/gpt.o 00:02:34.649 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:34.649 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:34.649 CC module/blobfs/bdev/blobfs_bdev.o 00:02:34.649 CC module/bdev/passthru/vbdev_passthru.o 00:02:34.649 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:34.649 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:34.649 CC module/bdev/nvme/bdev_nvme.o 00:02:34.649 CC module/bdev/split/vbdev_split.o 00:02:34.649 CC module/bdev/split/vbdev_split_rpc.o 00:02:34.649 CC module/bdev/nvme/vbdev_opal.o 00:02:34.649 CC module/bdev/nvme/nvme_rpc.o 00:02:34.649 CC module/bdev/nvme/bdev_mdns_client.o 00:02:34.649 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:34.649 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:34.649 CC module/bdev/error/vbdev_error.o 00:02:34.649 CC module/bdev/error/vbdev_error_rpc.o 00:02:34.649 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:34.649 CC module/bdev/lvol/vbdev_lvol.o 00:02:34.649 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:34.649 CC module/bdev/aio/bdev_aio.o 00:02:34.649 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:34.649 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:34.649 CC module/bdev/null/bdev_null_rpc.o 00:02:34.649 CC module/bdev/null/bdev_null.o 00:02:34.649 CC module/bdev/ftl/bdev_ftl.o 00:02:34.649 CC module/bdev/aio/bdev_aio_rpc.o 00:02:34.649 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:34.649 
CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:34.649 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:34.649 CC module/bdev/raid/bdev_raid.o 00:02:34.649 CC module/bdev/raid/bdev_raid_rpc.o 00:02:34.649 CC module/bdev/raid/bdev_raid_sb.o 00:02:34.649 CC module/bdev/raid/raid1.o 00:02:34.649 CC module/bdev/raid/raid0.o 00:02:34.649 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:34.649 CC module/bdev/malloc/bdev_malloc.o 00:02:34.649 CC module/bdev/raid/concat.o 00:02:34.649 CC module/bdev/iscsi/bdev_iscsi.o 00:02:34.649 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:34.907 LIB libspdk_bdev_split.a 00:02:34.907 LIB libspdk_blobfs_bdev.a 00:02:34.907 LIB libspdk_bdev_gpt.a 00:02:34.907 SO libspdk_bdev_split.so.6.0 00:02:34.907 SO libspdk_blobfs_bdev.so.6.0 00:02:34.907 LIB libspdk_bdev_error.a 00:02:34.907 SO libspdk_bdev_gpt.so.6.0 00:02:34.907 LIB libspdk_bdev_null.a 00:02:34.907 SO libspdk_bdev_error.so.6.0 00:02:35.166 SYMLINK libspdk_bdev_split.so 00:02:35.166 LIB libspdk_bdev_aio.a 00:02:35.166 LIB libspdk_bdev_ftl.a 00:02:35.166 LIB libspdk_bdev_passthru.a 00:02:35.166 SO libspdk_bdev_null.so.6.0 00:02:35.166 SYMLINK libspdk_blobfs_bdev.so 00:02:35.166 SYMLINK libspdk_bdev_gpt.so 00:02:35.166 SO libspdk_bdev_aio.so.6.0 00:02:35.166 SO libspdk_bdev_ftl.so.6.0 00:02:35.166 SYMLINK libspdk_bdev_error.so 00:02:35.166 LIB libspdk_bdev_delay.a 00:02:35.166 LIB libspdk_bdev_zone_block.a 00:02:35.166 SO libspdk_bdev_passthru.so.6.0 00:02:35.166 LIB libspdk_bdev_iscsi.a 00:02:35.166 SYMLINK libspdk_bdev_null.so 00:02:35.166 SO libspdk_bdev_delay.so.6.0 00:02:35.166 SO libspdk_bdev_zone_block.so.6.0 00:02:35.166 SO libspdk_bdev_iscsi.so.6.0 00:02:35.166 LIB libspdk_bdev_malloc.a 00:02:35.166 SYMLINK libspdk_bdev_aio.so 00:02:35.166 SYMLINK libspdk_bdev_ftl.so 00:02:35.166 SYMLINK libspdk_bdev_passthru.so 00:02:35.166 SO libspdk_bdev_malloc.so.6.0 00:02:35.166 SYMLINK libspdk_bdev_iscsi.so 00:02:35.166 SYMLINK libspdk_bdev_delay.so 00:02:35.166 SYMLINK 
libspdk_bdev_zone_block.so 00:02:35.166 LIB libspdk_bdev_lvol.a 00:02:35.166 LIB libspdk_bdev_virtio.a 00:02:35.166 SO libspdk_bdev_lvol.so.6.0 00:02:35.166 SYMLINK libspdk_bdev_malloc.so 00:02:35.166 SO libspdk_bdev_virtio.so.6.0 00:02:35.166 SYMLINK libspdk_bdev_lvol.so 00:02:35.424 SYMLINK libspdk_bdev_virtio.so 00:02:35.683 LIB libspdk_bdev_raid.a 00:02:35.683 SO libspdk_bdev_raid.so.6.0 00:02:35.683 SYMLINK libspdk_bdev_raid.so 00:02:36.619 LIB libspdk_bdev_nvme.a 00:02:36.619 SO libspdk_bdev_nvme.so.7.1 00:02:36.619 SYMLINK libspdk_bdev_nvme.so 00:02:37.556 CC module/event/subsystems/iobuf/iobuf.o 00:02:37.556 CC module/event/subsystems/vmd/vmd.o 00:02:37.556 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:37.556 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:37.556 CC module/event/subsystems/scheduler/scheduler.o 00:02:37.556 CC module/event/subsystems/sock/sock.o 00:02:37.556 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:37.556 CC module/event/subsystems/keyring/keyring.o 00:02:37.556 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:37.556 CC module/event/subsystems/fsdev/fsdev.o 00:02:37.556 LIB libspdk_event_scheduler.a 00:02:37.556 LIB libspdk_event_iobuf.a 00:02:37.556 LIB libspdk_event_fsdev.a 00:02:37.556 LIB libspdk_event_vmd.a 00:02:37.556 LIB libspdk_event_vhost_blk.a 00:02:37.556 LIB libspdk_event_keyring.a 00:02:37.556 LIB libspdk_event_vfu_tgt.a 00:02:37.556 LIB libspdk_event_sock.a 00:02:37.556 SO libspdk_event_scheduler.so.4.0 00:02:37.556 SO libspdk_event_iobuf.so.3.0 00:02:37.556 SO libspdk_event_vhost_blk.so.3.0 00:02:37.556 SO libspdk_event_fsdev.so.1.0 00:02:37.556 SO libspdk_event_vmd.so.6.0 00:02:37.556 SO libspdk_event_keyring.so.1.0 00:02:37.556 SO libspdk_event_sock.so.5.0 00:02:37.556 SO libspdk_event_vfu_tgt.so.3.0 00:02:37.556 SYMLINK libspdk_event_scheduler.so 00:02:37.556 SYMLINK libspdk_event_fsdev.so 00:02:37.556 SYMLINK libspdk_event_vmd.so 00:02:37.556 SYMLINK libspdk_event_vhost_blk.so 00:02:37.556 SYMLINK 
libspdk_event_keyring.so 00:02:37.556 SYMLINK libspdk_event_iobuf.so 00:02:37.556 SYMLINK libspdk_event_sock.so 00:02:37.556 SYMLINK libspdk_event_vfu_tgt.so 00:02:37.816 CC module/event/subsystems/accel/accel.o 00:02:38.076 LIB libspdk_event_accel.a 00:02:38.076 SO libspdk_event_accel.so.6.0 00:02:38.076 SYMLINK libspdk_event_accel.so 00:02:38.335 CC module/event/subsystems/bdev/bdev.o 00:02:38.594 LIB libspdk_event_bdev.a 00:02:38.594 SO libspdk_event_bdev.so.6.0 00:02:38.594 SYMLINK libspdk_event_bdev.so 00:02:39.162 CC module/event/subsystems/nbd/nbd.o 00:02:39.162 CC module/event/subsystems/ublk/ublk.o 00:02:39.162 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:39.162 CC module/event/subsystems/scsi/scsi.o 00:02:39.162 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:39.162 LIB libspdk_event_nbd.a 00:02:39.162 LIB libspdk_event_ublk.a 00:02:39.162 LIB libspdk_event_scsi.a 00:02:39.162 SO libspdk_event_nbd.so.6.0 00:02:39.162 SO libspdk_event_ublk.so.3.0 00:02:39.162 SO libspdk_event_scsi.so.6.0 00:02:39.162 LIB libspdk_event_nvmf.a 00:02:39.162 SYMLINK libspdk_event_nbd.so 00:02:39.162 SYMLINK libspdk_event_ublk.so 00:02:39.162 SO libspdk_event_nvmf.so.6.0 00:02:39.421 SYMLINK libspdk_event_scsi.so 00:02:39.421 SYMLINK libspdk_event_nvmf.so 00:02:39.680 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:39.680 CC module/event/subsystems/iscsi/iscsi.o 00:02:39.680 LIB libspdk_event_vhost_scsi.a 00:02:39.680 LIB libspdk_event_iscsi.a 00:02:39.680 SO libspdk_event_vhost_scsi.so.3.0 00:02:39.680 SO libspdk_event_iscsi.so.6.0 00:02:39.939 SYMLINK libspdk_event_vhost_scsi.so 00:02:39.939 SYMLINK libspdk_event_iscsi.so 00:02:39.939 SO libspdk.so.6.0 00:02:39.939 SYMLINK libspdk.so 00:02:40.518 CC app/trace_record/trace_record.o 00:02:40.518 CXX app/trace/trace.o 00:02:40.518 CC app/spdk_top/spdk_top.o 00:02:40.518 CC app/spdk_nvme_identify/identify.o 00:02:40.518 CC app/spdk_nvme_discover/discovery_aer.o 00:02:40.518 CC test/rpc_client/rpc_client_test.o 
00:02:40.518 CC app/spdk_lspci/spdk_lspci.o 00:02:40.518 CC app/spdk_nvme_perf/perf.o 00:02:40.518 TEST_HEADER include/spdk/accel_module.h 00:02:40.518 TEST_HEADER include/spdk/accel.h 00:02:40.518 TEST_HEADER include/spdk/barrier.h 00:02:40.518 TEST_HEADER include/spdk/assert.h 00:02:40.518 TEST_HEADER include/spdk/base64.h 00:02:40.518 TEST_HEADER include/spdk/bdev_module.h 00:02:40.518 TEST_HEADER include/spdk/bdev.h 00:02:40.518 TEST_HEADER include/spdk/bdev_zone.h 00:02:40.518 TEST_HEADER include/spdk/bit_array.h 00:02:40.518 TEST_HEADER include/spdk/blob_bdev.h 00:02:40.518 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:40.518 TEST_HEADER include/spdk/blobfs.h 00:02:40.518 TEST_HEADER include/spdk/bit_pool.h 00:02:40.518 TEST_HEADER include/spdk/blob.h 00:02:40.518 TEST_HEADER include/spdk/cpuset.h 00:02:40.518 TEST_HEADER include/spdk/config.h 00:02:40.518 TEST_HEADER include/spdk/conf.h 00:02:40.518 TEST_HEADER include/spdk/crc16.h 00:02:40.518 TEST_HEADER include/spdk/crc64.h 00:02:40.518 TEST_HEADER include/spdk/crc32.h 00:02:40.518 TEST_HEADER include/spdk/dif.h 00:02:40.518 TEST_HEADER include/spdk/endian.h 00:02:40.518 TEST_HEADER include/spdk/env_dpdk.h 00:02:40.518 TEST_HEADER include/spdk/dma.h 00:02:40.518 TEST_HEADER include/spdk/event.h 00:02:40.518 TEST_HEADER include/spdk/env.h 00:02:40.518 TEST_HEADER include/spdk/fd.h 00:02:40.518 TEST_HEADER include/spdk/fd_group.h 00:02:40.518 TEST_HEADER include/spdk/fsdev_module.h 00:02:40.518 TEST_HEADER include/spdk/fsdev.h 00:02:40.518 TEST_HEADER include/spdk/ftl.h 00:02:40.518 TEST_HEADER include/spdk/file.h 00:02:40.518 TEST_HEADER include/spdk/gpt_spec.h 00:02:40.518 TEST_HEADER include/spdk/hexlify.h 00:02:40.518 TEST_HEADER include/spdk/histogram_data.h 00:02:40.518 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:40.518 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:40.518 TEST_HEADER include/spdk/idxd.h 00:02:40.518 TEST_HEADER include/spdk/idxd_spec.h 00:02:40.518 TEST_HEADER 
include/spdk/init.h 00:02:40.518 TEST_HEADER include/spdk/ioat_spec.h 00:02:40.518 TEST_HEADER include/spdk/ioat.h 00:02:40.518 TEST_HEADER include/spdk/iscsi_spec.h 00:02:40.518 TEST_HEADER include/spdk/keyring.h 00:02:40.518 TEST_HEADER include/spdk/json.h 00:02:40.518 TEST_HEADER include/spdk/jsonrpc.h 00:02:40.518 TEST_HEADER include/spdk/likely.h 00:02:40.518 TEST_HEADER include/spdk/keyring_module.h 00:02:40.518 TEST_HEADER include/spdk/log.h 00:02:40.518 TEST_HEADER include/spdk/lvol.h 00:02:40.518 TEST_HEADER include/spdk/md5.h 00:02:40.518 TEST_HEADER include/spdk/mmio.h 00:02:40.518 TEST_HEADER include/spdk/nbd.h 00:02:40.518 CC app/nvmf_tgt/nvmf_main.o 00:02:40.518 TEST_HEADER include/spdk/memory.h 00:02:40.518 CC app/iscsi_tgt/iscsi_tgt.o 00:02:40.518 CC app/spdk_dd/spdk_dd.o 00:02:40.518 TEST_HEADER include/spdk/nvme.h 00:02:40.518 TEST_HEADER include/spdk/notify.h 00:02:40.518 TEST_HEADER include/spdk/net.h 00:02:40.518 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:40.518 TEST_HEADER include/spdk/nvme_intel.h 00:02:40.518 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:40.518 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:40.518 TEST_HEADER include/spdk/nvme_spec.h 00:02:40.518 TEST_HEADER include/spdk/nvme_zns.h 00:02:40.518 TEST_HEADER include/spdk/nvmf.h 00:02:40.518 TEST_HEADER include/spdk/nvmf_transport.h 00:02:40.518 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:40.518 TEST_HEADER include/spdk/opal.h 00:02:40.518 TEST_HEADER include/spdk/nvmf_spec.h 00:02:40.518 TEST_HEADER include/spdk/opal_spec.h 00:02:40.518 TEST_HEADER include/spdk/pci_ids.h 00:02:40.518 TEST_HEADER include/spdk/queue.h 00:02:40.518 TEST_HEADER include/spdk/reduce.h 00:02:40.518 TEST_HEADER include/spdk/pipe.h 00:02:40.518 TEST_HEADER include/spdk/rpc.h 00:02:40.518 TEST_HEADER include/spdk/scsi.h 00:02:40.518 TEST_HEADER include/spdk/scheduler.h 00:02:40.518 TEST_HEADER include/spdk/sock.h 00:02:40.518 TEST_HEADER include/spdk/scsi_spec.h 00:02:40.518 TEST_HEADER 
include/spdk/stdinc.h 00:02:40.518 TEST_HEADER include/spdk/string.h 00:02:40.518 TEST_HEADER include/spdk/trace.h 00:02:40.518 TEST_HEADER include/spdk/thread.h 00:02:40.518 TEST_HEADER include/spdk/trace_parser.h 00:02:40.518 TEST_HEADER include/spdk/tree.h 00:02:40.518 TEST_HEADER include/spdk/util.h 00:02:40.518 TEST_HEADER include/spdk/uuid.h 00:02:40.518 TEST_HEADER include/spdk/version.h 00:02:40.518 TEST_HEADER include/spdk/ublk.h 00:02:40.518 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:40.518 TEST_HEADER include/spdk/vhost.h 00:02:40.518 TEST_HEADER include/spdk/xor.h 00:02:40.518 CC app/spdk_tgt/spdk_tgt.o 00:02:40.518 TEST_HEADER include/spdk/vmd.h 00:02:40.518 TEST_HEADER include/spdk/zipf.h 00:02:40.518 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:40.518 CXX test/cpp_headers/accel_module.o 00:02:40.518 CXX test/cpp_headers/accel.o 00:02:40.518 CXX test/cpp_headers/assert.o 00:02:40.518 CXX test/cpp_headers/barrier.o 00:02:40.518 CXX test/cpp_headers/base64.o 00:02:40.518 CXX test/cpp_headers/bdev.o 00:02:40.518 CXX test/cpp_headers/bit_array.o 00:02:40.518 CXX test/cpp_headers/bdev_zone.o 00:02:40.518 CXX test/cpp_headers/bit_pool.o 00:02:40.518 CXX test/cpp_headers/bdev_module.o 00:02:40.518 CXX test/cpp_headers/blobfs_bdev.o 00:02:40.518 CXX test/cpp_headers/blob_bdev.o 00:02:40.518 CXX test/cpp_headers/conf.o 00:02:40.518 CXX test/cpp_headers/blobfs.o 00:02:40.518 CXX test/cpp_headers/cpuset.o 00:02:40.518 CXX test/cpp_headers/blob.o 00:02:40.518 CXX test/cpp_headers/config.o 00:02:40.518 CXX test/cpp_headers/crc64.o 00:02:40.518 CXX test/cpp_headers/crc32.o 00:02:40.518 CXX test/cpp_headers/crc16.o 00:02:40.518 CXX test/cpp_headers/dif.o 00:02:40.518 CXX test/cpp_headers/endian.o 00:02:40.518 CXX test/cpp_headers/env.o 00:02:40.518 CXX test/cpp_headers/event.o 00:02:40.518 CXX test/cpp_headers/dma.o 00:02:40.518 CXX test/cpp_headers/env_dpdk.o 00:02:40.518 CXX test/cpp_headers/fd_group.o 00:02:40.518 CXX test/cpp_headers/fd.o 00:02:40.518 CXX 
test/cpp_headers/fsdev.o 00:02:40.518 CXX test/cpp_headers/file.o 00:02:40.518 CXX test/cpp_headers/fsdev_module.o 00:02:40.518 CXX test/cpp_headers/gpt_spec.o 00:02:40.518 CXX test/cpp_headers/fuse_dispatcher.o 00:02:40.518 CXX test/cpp_headers/ftl.o 00:02:40.518 CXX test/cpp_headers/histogram_data.o 00:02:40.518 CXX test/cpp_headers/hexlify.o 00:02:40.518 CXX test/cpp_headers/idxd.o 00:02:40.518 CXX test/cpp_headers/idxd_spec.o 00:02:40.518 CXX test/cpp_headers/init.o 00:02:40.518 CXX test/cpp_headers/ioat.o 00:02:40.518 CXX test/cpp_headers/ioat_spec.o 00:02:40.518 CXX test/cpp_headers/json.o 00:02:40.518 CXX test/cpp_headers/jsonrpc.o 00:02:40.518 CXX test/cpp_headers/keyring.o 00:02:40.518 CXX test/cpp_headers/iscsi_spec.o 00:02:40.518 CXX test/cpp_headers/likely.o 00:02:40.518 CXX test/cpp_headers/log.o 00:02:40.518 CXX test/cpp_headers/keyring_module.o 00:02:40.519 CXX test/cpp_headers/lvol.o 00:02:40.519 CXX test/cpp_headers/md5.o 00:02:40.519 CXX test/cpp_headers/mmio.o 00:02:40.519 CXX test/cpp_headers/net.o 00:02:40.519 CXX test/cpp_headers/memory.o 00:02:40.519 CXX test/cpp_headers/nbd.o 00:02:40.519 CXX test/cpp_headers/nvme_intel.o 00:02:40.519 CXX test/cpp_headers/notify.o 00:02:40.519 CXX test/cpp_headers/nvme.o 00:02:40.519 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:40.519 CXX test/cpp_headers/nvme_ocssd.o 00:02:40.519 CXX test/cpp_headers/nvme_spec.o 00:02:40.519 CXX test/cpp_headers/nvmf_cmd.o 00:02:40.519 CXX test/cpp_headers/nvme_zns.o 00:02:40.519 CXX test/cpp_headers/nvmf.o 00:02:40.519 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:40.519 CXX test/cpp_headers/nvmf_spec.o 00:02:40.519 CXX test/cpp_headers/nvmf_transport.o 00:02:40.519 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:40.519 CC examples/util/zipf/zipf.o 00:02:40.519 CC app/fio/nvme/fio_plugin.o 00:02:40.519 CXX test/cpp_headers/opal.o 00:02:40.519 CC test/env/vtophys/vtophys.o 00:02:40.519 CC test/env/memory/memory_ut.o 00:02:40.519 CC examples/ioat/verify/verify.o 
00:02:40.519 CC examples/ioat/perf/perf.o 00:02:40.519 CC test/dma/test_dma/test_dma.o 00:02:40.519 CC test/app/histogram_perf/histogram_perf.o 00:02:40.519 CC test/app/stub/stub.o 00:02:40.519 CXX test/cpp_headers/opal_spec.o 00:02:40.519 CC test/env/pci/pci_ut.o 00:02:40.519 CC test/app/jsoncat/jsoncat.o 00:02:40.519 CC test/thread/poller_perf/poller_perf.o 00:02:40.798 LINK spdk_lspci 00:02:40.798 CC test/app/bdev_svc/bdev_svc.o 00:02:40.798 CC app/fio/bdev/fio_plugin.o 00:02:41.064 LINK interrupt_tgt 00:02:41.064 LINK rpc_client_test 00:02:41.064 LINK iscsi_tgt 00:02:41.064 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:41.064 CC test/env/mem_callbacks/mem_callbacks.o 00:02:41.064 LINK spdk_nvme_discover 00:02:41.064 LINK spdk_tgt 00:02:41.064 LINK nvmf_tgt 00:02:41.064 CXX test/cpp_headers/pci_ids.o 00:02:41.064 CXX test/cpp_headers/pipe.o 00:02:41.064 CXX test/cpp_headers/queue.o 00:02:41.064 LINK histogram_perf 00:02:41.064 CXX test/cpp_headers/reduce.o 00:02:41.064 CXX test/cpp_headers/rpc.o 00:02:41.064 CXX test/cpp_headers/scheduler.o 00:02:41.064 LINK spdk_trace_record 00:02:41.064 CXX test/cpp_headers/scsi.o 00:02:41.064 CXX test/cpp_headers/scsi_spec.o 00:02:41.064 CXX test/cpp_headers/sock.o 00:02:41.064 CXX test/cpp_headers/stdinc.o 00:02:41.064 CXX test/cpp_headers/string.o 00:02:41.064 CXX test/cpp_headers/thread.o 00:02:41.064 CXX test/cpp_headers/trace.o 00:02:41.064 CXX test/cpp_headers/trace_parser.o 00:02:41.064 CXX test/cpp_headers/ublk.o 00:02:41.064 CXX test/cpp_headers/tree.o 00:02:41.064 CXX test/cpp_headers/util.o 00:02:41.064 CXX test/cpp_headers/uuid.o 00:02:41.064 CXX test/cpp_headers/vfio_user_pci.o 00:02:41.064 CXX test/cpp_headers/vfio_user_spec.o 00:02:41.064 CXX test/cpp_headers/version.o 00:02:41.064 CXX test/cpp_headers/vmd.o 00:02:41.064 CXX test/cpp_headers/xor.o 00:02:41.064 CXX test/cpp_headers/vhost.o 00:02:41.064 CXX test/cpp_headers/zipf.o 00:02:41.323 LINK zipf 00:02:41.323 LINK verify 00:02:41.323 LINK bdev_svc 
00:02:41.323 LINK vtophys 00:02:41.323 LINK ioat_perf 00:02:41.323 LINK jsoncat 00:02:41.323 LINK env_dpdk_post_init 00:02:41.323 LINK poller_perf 00:02:41.323 LINK stub 00:02:41.323 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:41.323 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:41.323 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:41.323 LINK spdk_dd 00:02:41.582 LINK pci_ut 00:02:41.582 LINK spdk_trace 00:02:41.582 LINK spdk_nvme 00:02:41.582 LINK test_dma 00:02:41.582 LINK nvme_fuzz 00:02:41.582 LINK spdk_bdev 00:02:41.582 LINK spdk_nvme_perf 00:02:41.841 CC test/event/reactor_perf/reactor_perf.o 00:02:41.841 CC test/event/event_perf/event_perf.o 00:02:41.841 CC test/event/reactor/reactor.o 00:02:41.841 CC test/event/app_repeat/app_repeat.o 00:02:41.841 LINK vhost_fuzz 00:02:41.841 CC test/event/scheduler/scheduler.o 00:02:41.841 CC examples/vmd/led/led.o 00:02:41.841 CC examples/sock/hello_world/hello_sock.o 00:02:41.841 CC examples/vmd/lsvmd/lsvmd.o 00:02:41.841 CC examples/idxd/perf/perf.o 00:02:41.841 LINK spdk_top 00:02:41.841 LINK spdk_nvme_identify 00:02:41.841 CC examples/thread/thread/thread_ex.o 00:02:41.841 LINK reactor 00:02:41.841 LINK reactor_perf 00:02:41.841 LINK event_perf 00:02:41.841 LINK mem_callbacks 00:02:41.841 CC app/vhost/vhost.o 00:02:41.841 LINK app_repeat 00:02:42.100 LINK lsvmd 00:02:42.100 LINK led 00:02:42.100 LINK scheduler 00:02:42.100 LINK hello_sock 00:02:42.100 CC test/nvme/compliance/nvme_compliance.o 00:02:42.100 CC test/nvme/fused_ordering/fused_ordering.o 00:02:42.100 CC test/nvme/reset/reset.o 00:02:42.100 CC test/nvme/fdp/fdp.o 00:02:42.100 CC test/nvme/simple_copy/simple_copy.o 00:02:42.100 CC test/nvme/startup/startup.o 00:02:42.100 CC test/nvme/overhead/overhead.o 00:02:42.100 CC test/nvme/e2edp/nvme_dp.o 00:02:42.100 CC test/nvme/aer/aer.o 00:02:42.100 CC test/nvme/reserve/reserve.o 00:02:42.100 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:42.100 CC test/nvme/err_injection/err_injection.o 00:02:42.100 CC 
test/nvme/connect_stress/connect_stress.o 00:02:42.100 CC test/accel/dif/dif.o 00:02:42.100 CC test/nvme/sgl/sgl.o 00:02:42.100 CC test/nvme/cuse/cuse.o 00:02:42.100 CC test/blobfs/mkfs/mkfs.o 00:02:42.100 CC test/nvme/boot_partition/boot_partition.o 00:02:42.100 LINK vhost 00:02:42.100 LINK thread 00:02:42.100 LINK idxd_perf 00:02:42.100 LINK memory_ut 00:02:42.100 CC test/lvol/esnap/esnap.o 00:02:42.357 LINK doorbell_aers 00:02:42.357 LINK connect_stress 00:02:42.357 LINK startup 00:02:42.357 LINK boot_partition 00:02:42.357 LINK err_injection 00:02:42.357 LINK fused_ordering 00:02:42.357 LINK simple_copy 00:02:42.357 LINK reserve 00:02:42.357 LINK mkfs 00:02:42.357 LINK sgl 00:02:42.357 LINK reset 00:02:42.357 LINK overhead 00:02:42.357 LINK nvme_dp 00:02:42.357 LINK nvme_compliance 00:02:42.357 LINK aer 00:02:42.357 LINK fdp 00:02:42.616 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:42.616 CC examples/nvme/reconnect/reconnect.o 00:02:42.616 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:42.616 CC examples/nvme/abort/abort.o 00:02:42.616 CC examples/nvme/arbitration/arbitration.o 00:02:42.616 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:42.616 CC examples/nvme/hello_world/hello_world.o 00:02:42.616 CC examples/nvme/hotplug/hotplug.o 00:02:42.616 LINK pmr_persistence 00:02:42.616 CC examples/accel/perf/accel_perf.o 00:02:42.616 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:42.616 CC examples/blob/hello_world/hello_blob.o 00:02:42.616 CC examples/blob/cli/blobcli.o 00:02:42.616 LINK cmb_copy 00:02:42.616 LINK dif 00:02:42.616 LINK hello_world 00:02:42.874 LINK hotplug 00:02:42.874 LINK reconnect 00:02:42.874 LINK arbitration 00:02:42.874 LINK iscsi_fuzz 00:02:42.874 LINK abort 00:02:42.874 LINK nvme_manage 00:02:42.874 LINK hello_blob 00:02:42.874 LINK hello_fsdev 00:02:43.132 LINK accel_perf 00:02:43.132 LINK blobcli 00:02:43.132 LINK cuse 00:02:43.132 CC test/bdev/bdevio/bdevio.o 00:02:43.699 LINK bdevio 00:02:43.699 CC 
examples/bdev/hello_world/hello_bdev.o 00:02:43.699 CC examples/bdev/bdevperf/bdevperf.o 00:02:43.699 LINK hello_bdev 00:02:44.266 LINK bdevperf 00:02:44.833 CC examples/nvmf/nvmf/nvmf.o 00:02:44.833 LINK nvmf 00:02:45.769 LINK esnap 00:02:46.027 00:02:46.027 real 0m55.758s 00:02:46.027 user 8m24.250s 00:02:46.027 sys 3m48.557s 00:02:46.027 04:39:36 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:46.027 04:39:36 make -- common/autotest_common.sh@10 -- $ set +x 00:02:46.027 ************************************ 00:02:46.027 END TEST make 00:02:46.027 ************************************ 00:02:46.027 04:39:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:46.027 04:39:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:46.027 04:39:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:46.027 04:39:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.027 04:39:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:46.027 04:39:37 -- pm/common@44 -- $ pid=351905 00:02:46.027 04:39:37 -- pm/common@50 -- $ kill -TERM 351905 00:02:46.027 04:39:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.027 04:39:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:46.027 04:39:37 -- pm/common@44 -- $ pid=351906 00:02:46.027 04:39:37 -- pm/common@50 -- $ kill -TERM 351906 00:02:46.027 04:39:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.027 04:39:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:46.027 04:39:37 -- pm/common@44 -- $ pid=351909 00:02:46.027 04:39:37 -- pm/common@50 -- $ kill -TERM 351909 00:02:46.027 04:39:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.027 04:39:37 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:46.027 04:39:37 -- pm/common@44 -- $ pid=351937 00:02:46.028 04:39:37 -- pm/common@50 -- $ sudo -E kill -TERM 351937 00:02:46.028 04:39:37 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:46.028 04:39:37 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:46.028 04:39:37 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:46.028 04:39:37 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:46.028 04:39:37 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:46.286 04:39:37 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:46.287 04:39:37 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:46.287 04:39:37 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:46.287 04:39:37 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:46.287 04:39:37 -- scripts/common.sh@336 -- # IFS=.-: 00:02:46.287 04:39:37 -- scripts/common.sh@336 -- # read -ra ver1 00:02:46.287 04:39:37 -- scripts/common.sh@337 -- # IFS=.-: 00:02:46.287 04:39:37 -- scripts/common.sh@337 -- # read -ra ver2 00:02:46.287 04:39:37 -- scripts/common.sh@338 -- # local 'op=<' 00:02:46.287 04:39:37 -- scripts/common.sh@340 -- # ver1_l=2 00:02:46.287 04:39:37 -- scripts/common.sh@341 -- # ver2_l=1 00:02:46.287 04:39:37 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:46.287 04:39:37 -- scripts/common.sh@344 -- # case "$op" in 00:02:46.287 04:39:37 -- scripts/common.sh@345 -- # : 1 00:02:46.287 04:39:37 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:46.287 04:39:37 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:46.287 04:39:37 -- scripts/common.sh@365 -- # decimal 1 00:02:46.287 04:39:37 -- scripts/common.sh@353 -- # local d=1 00:02:46.287 04:39:37 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:46.287 04:39:37 -- scripts/common.sh@355 -- # echo 1 00:02:46.287 04:39:37 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:46.287 04:39:37 -- scripts/common.sh@366 -- # decimal 2 00:02:46.287 04:39:37 -- scripts/common.sh@353 -- # local d=2 00:02:46.287 04:39:37 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:46.287 04:39:37 -- scripts/common.sh@355 -- # echo 2 00:02:46.287 04:39:37 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:46.287 04:39:37 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:46.287 04:39:37 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:46.287 04:39:37 -- scripts/common.sh@368 -- # return 0 00:02:46.287 04:39:37 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:46.287 04:39:37 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:46.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.287 --rc genhtml_branch_coverage=1 00:02:46.287 --rc genhtml_function_coverage=1 00:02:46.287 --rc genhtml_legend=1 00:02:46.287 --rc geninfo_all_blocks=1 00:02:46.287 --rc geninfo_unexecuted_blocks=1 00:02:46.287 00:02:46.287 ' 00:02:46.287 04:39:37 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:46.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.287 --rc genhtml_branch_coverage=1 00:02:46.287 --rc genhtml_function_coverage=1 00:02:46.287 --rc genhtml_legend=1 00:02:46.287 --rc geninfo_all_blocks=1 00:02:46.287 --rc geninfo_unexecuted_blocks=1 00:02:46.287 00:02:46.287 ' 00:02:46.287 04:39:37 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:46.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.287 --rc genhtml_branch_coverage=1 00:02:46.287 --rc 
genhtml_function_coverage=1 00:02:46.287 --rc genhtml_legend=1 00:02:46.287 --rc geninfo_all_blocks=1 00:02:46.287 --rc geninfo_unexecuted_blocks=1 00:02:46.287 00:02:46.287 ' 00:02:46.287 04:39:37 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:46.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.287 --rc genhtml_branch_coverage=1 00:02:46.287 --rc genhtml_function_coverage=1 00:02:46.287 --rc genhtml_legend=1 00:02:46.287 --rc geninfo_all_blocks=1 00:02:46.287 --rc geninfo_unexecuted_blocks=1 00:02:46.287 00:02:46.287 ' 00:02:46.287 04:39:37 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:46.287 04:39:37 -- nvmf/common.sh@7 -- # uname -s 00:02:46.287 04:39:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:46.287 04:39:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:46.287 04:39:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:46.287 04:39:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:46.287 04:39:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:46.287 04:39:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:46.287 04:39:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:46.287 04:39:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:46.287 04:39:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:46.287 04:39:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:46.287 04:39:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:02:46.287 04:39:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:02:46.287 04:39:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:46.287 04:39:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:46.287 04:39:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:46.287 04:39:37 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:46.287 04:39:37 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:46.287 04:39:37 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:46.287 04:39:37 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:46.287 04:39:37 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:46.287 04:39:37 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:46.287 04:39:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.287 04:39:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.287 04:39:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.287 04:39:37 -- paths/export.sh@5 -- # export PATH 00:02:46.287 04:39:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.287 04:39:37 -- nvmf/common.sh@51 -- # : 0 00:02:46.287 04:39:37 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:46.287 04:39:37 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:46.287 04:39:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:46.287 04:39:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:46.287 04:39:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:46.287 04:39:37 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:46.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:46.287 04:39:37 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:46.287 04:39:37 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:46.287 04:39:37 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:46.287 04:39:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:46.287 04:39:37 -- spdk/autotest.sh@32 -- # uname -s 00:02:46.287 04:39:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:46.287 04:39:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:46.287 04:39:37 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:46.287 04:39:37 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:46.287 04:39:37 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:46.287 04:39:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:46.287 04:39:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:46.287 04:39:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:46.287 04:39:37 -- spdk/autotest.sh@48 -- # udevadm_pid=414539 00:02:46.287 04:39:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:46.287 04:39:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:46.287 04:39:37 -- pm/common@17 -- # local monitor 00:02:46.287 04:39:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.287 04:39:37 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:46.287 04:39:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.287 04:39:37 -- pm/common@21 -- # date +%s 00:02:46.287 04:39:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.287 04:39:37 -- pm/common@21 -- # date +%s 00:02:46.287 04:39:37 -- pm/common@25 -- # sleep 1 00:02:46.287 04:39:37 -- pm/common@21 -- # date +%s 00:02:46.287 04:39:37 -- pm/common@21 -- # date +%s 00:02:46.287 04:39:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733801977 00:02:46.287 04:39:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733801977 00:02:46.287 04:39:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733801977 00:02:46.287 04:39:37 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733801977 00:02:46.287 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733801977_collect-cpu-load.pm.log 00:02:46.287 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733801977_collect-vmstat.pm.log 00:02:46.287 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733801977_collect-cpu-temp.pm.log 00:02:46.287 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733801977_collect-bmc-pm.bmc.pm.log 00:02:47.225 
04:39:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:47.225 04:39:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:47.225 04:39:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:47.225 04:39:38 -- common/autotest_common.sh@10 -- # set +x 00:02:47.225 04:39:38 -- spdk/autotest.sh@59 -- # create_test_list 00:02:47.225 04:39:38 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:47.225 04:39:38 -- common/autotest_common.sh@10 -- # set +x 00:02:47.225 04:39:38 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:47.225 04:39:38 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:47.225 04:39:38 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:47.225 04:39:38 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:47.225 04:39:38 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:47.225 04:39:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:47.225 04:39:38 -- common/autotest_common.sh@1457 -- # uname 00:02:47.225 04:39:38 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:47.225 04:39:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:47.225 04:39:38 -- common/autotest_common.sh@1477 -- # uname 00:02:47.225 04:39:38 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:47.225 04:39:38 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:47.225 04:39:38 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:47.483 lcov: LCOV version 1.15 00:02:47.484 04:39:38 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:59.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:59.688 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:14.565 04:40:03 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:14.565 04:40:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:14.565 04:40:03 -- common/autotest_common.sh@10 -- # set +x 00:03:14.565 04:40:03 -- spdk/autotest.sh@78 -- # rm -f 00:03:14.565 04:40:03 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.134 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:15.134 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:15.393 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:15.393 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:15.393 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:15.393 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:15.393 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:15.393 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:15.393 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:15.393 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:15.393 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:15.393 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:15.393 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:15.393 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:15.652 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:15.652 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:15.652 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:15.652 04:40:06 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:15.652 04:40:06 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:15.652 04:40:06 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:15.652 04:40:06 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:15.652 04:40:06 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:15.652 04:40:06 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:15.652 04:40:06 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:15.652 04:40:06 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:03:15.652 04:40:06 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:15.652 04:40:06 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:15.652 04:40:06 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:15.652 04:40:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:15.652 04:40:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:15.652 04:40:06 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:15.652 04:40:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:15.652 04:40:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:15.652 04:40:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:15.652 04:40:06 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:15.652 04:40:06 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:15.652 No valid GPT data, bailing 00:03:15.652 04:40:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:15.652 04:40:06 -- scripts/common.sh@394 -- # pt= 00:03:15.652 04:40:06 -- scripts/common.sh@395 -- 
# return 1 00:03:15.652 04:40:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:15.652 1+0 records in 00:03:15.652 1+0 records out 00:03:15.652 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00130459 s, 804 MB/s 00:03:15.652 04:40:06 -- spdk/autotest.sh@105 -- # sync 00:03:15.652 04:40:06 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:15.652 04:40:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:15.652 04:40:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:22.219 04:40:12 -- spdk/autotest.sh@111 -- # uname -s 00:03:22.219 04:40:12 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:22.219 04:40:12 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:22.219 04:40:12 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:24.124 Hugepages 00:03:24.124 node hugesize free / total 00:03:24.124 node0 1048576kB 0 / 0 00:03:24.124 node0 2048kB 0 / 0 00:03:24.124 node1 1048576kB 0 / 0 00:03:24.124 node1 2048kB 0 / 0 00:03:24.124 00:03:24.124 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:24.124 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:24.124 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:24.124 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:24.124 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:24.124 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:24.124 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:24.124 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:24.124 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:24.124 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:24.124 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:24.124 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:24.124 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:24.124 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:24.124 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:24.124 I/OAT 0000:80:04.5 8086 
2021 1 ioatdma - - 00:03:24.124 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:24.124 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:24.124 04:40:15 -- spdk/autotest.sh@117 -- # uname -s 00:03:24.124 04:40:15 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:24.124 04:40:15 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:24.124 04:40:15 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.413 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:27.413 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:27.413 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:27.413 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:27.413 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:27.413 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:27.413 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:27.413 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:27.413 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:27.413 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:27.413 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:27.413 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:27.413 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:27.413 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:27.413 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:27.413 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:27.672 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:27.931 04:40:18 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:28.868 04:40:19 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:28.868 04:40:19 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:28.868 04:40:19 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:28.868 04:40:19 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:28.868 04:40:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:28.868 04:40:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:28.868 04:40:19 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:28.868 04:40:19 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:28.868 04:40:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:28.868 04:40:19 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:28.868 04:40:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:28.868 04:40:19 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.155 Waiting for block devices as requested 00:03:32.155 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:32.155 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:32.155 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:32.156 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:32.156 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:32.156 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:32.156 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:32.414 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:32.414 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:32.414 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:32.674 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:32.674 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:32.674 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:32.932 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:32.932 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:32.932 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:32.932 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:33.191 04:40:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:33.192 04:40:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:33.192 04:40:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:33.192 04:40:24 -- 
common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:33.192 04:40:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:33.192 04:40:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:33.192 04:40:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:33.192 04:40:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:33.192 04:40:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:33.192 04:40:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:33.192 04:40:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:33.192 04:40:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:33.192 04:40:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:33.192 04:40:24 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:33.192 04:40:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:33.192 04:40:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:33.192 04:40:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:33.192 04:40:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:33.192 04:40:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:33.192 04:40:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:33.192 04:40:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:33.192 04:40:24 -- common/autotest_common.sh@1543 -- # continue 00:03:33.192 04:40:24 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:33.192 04:40:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:33.192 04:40:24 -- common/autotest_common.sh@10 -- # set +x 00:03:33.192 04:40:24 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:33.192 04:40:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:33.192 
04:40:24 -- common/autotest_common.sh@10 -- # set +x 00:03:33.192 04:40:24 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:36.484 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:36.484 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:36.484 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:36.484 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:36.484 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:36.484 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:36.484 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:36.484 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:36.484 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:36.484 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:36.484 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:36.484 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:36.484 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:36.484 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:36.484 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:36.484 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:37.052 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:37.052 04:40:28 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:37.052 04:40:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:37.052 04:40:28 -- common/autotest_common.sh@10 -- # set +x 00:03:37.052 04:40:28 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:37.052 04:40:28 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:37.052 04:40:28 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:37.052 04:40:28 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:37.052 04:40:28 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:37.052 04:40:28 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:37.052 04:40:28 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:37.052 04:40:28 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:03:37.052 04:40:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:37.052 04:40:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:37.052 04:40:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:37.052 04:40:28 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:37.052 04:40:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:37.311 04:40:28 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:37.311 04:40:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:37.311 04:40:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:37.311 04:40:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:37.311 04:40:28 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:37.311 04:40:28 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:37.311 04:40:28 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:37.311 04:40:28 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:37.311 04:40:28 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:37.311 04:40:28 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:37.311 04:40:28 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=428675 00:03:37.311 04:40:28 -- common/autotest_common.sh@1585 -- # waitforlisten 428675 00:03:37.311 04:40:28 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.311 04:40:28 -- common/autotest_common.sh@835 -- # '[' -z 428675 ']' 00:03:37.311 04:40:28 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:37.311 04:40:28 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:37.311 04:40:28 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:03:37.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:37.311 04:40:28 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:37.311 04:40:28 -- common/autotest_common.sh@10 -- # set +x 00:03:37.311 [2024-12-10 04:40:28.255997] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:03:37.311 [2024-12-10 04:40:28.256051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428675 ] 00:03:37.311 [2024-12-10 04:40:28.328515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.311 [2024-12-10 04:40:28.370137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:37.570 04:40:28 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:37.570 04:40:28 -- common/autotest_common.sh@868 -- # return 0 00:03:37.570 04:40:28 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:37.570 04:40:28 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:37.570 04:40:28 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:40.854 nvme0n1 00:03:40.854 04:40:31 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:40.854 [2024-12-10 04:40:31.768052] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:40.854 [2024-12-10 04:40:31.768079] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:40.854 request: 00:03:40.854 { 00:03:40.854 "nvme_ctrlr_name": "nvme0", 00:03:40.854 "password": "test", 00:03:40.854 "method": 
"bdev_nvme_opal_revert", 00:03:40.854 "req_id": 1 00:03:40.854 } 00:03:40.854 Got JSON-RPC error response 00:03:40.854 response: 00:03:40.854 { 00:03:40.854 "code": -32603, 00:03:40.854 "message": "Internal error" 00:03:40.854 } 00:03:40.854 04:40:31 -- common/autotest_common.sh@1591 -- # true 00:03:40.854 04:40:31 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:40.854 04:40:31 -- common/autotest_common.sh@1595 -- # killprocess 428675 00:03:40.854 04:40:31 -- common/autotest_common.sh@954 -- # '[' -z 428675 ']' 00:03:40.855 04:40:31 -- common/autotest_common.sh@958 -- # kill -0 428675 00:03:40.855 04:40:31 -- common/autotest_common.sh@959 -- # uname 00:03:40.855 04:40:31 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:40.855 04:40:31 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 428675 00:03:40.855 04:40:31 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:40.855 04:40:31 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:40.855 04:40:31 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 428675' 00:03:40.855 killing process with pid 428675 00:03:40.855 04:40:31 -- common/autotest_common.sh@973 -- # kill 428675 00:03:40.855 04:40:31 -- common/autotest_common.sh@978 -- # wait 428675 00:03:42.757 04:40:33 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:42.757 04:40:33 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:42.757 04:40:33 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:42.757 04:40:33 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:42.757 04:40:33 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:42.757 04:40:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:42.757 04:40:33 -- common/autotest_common.sh@10 -- # set +x 00:03:42.757 04:40:33 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:42.757 04:40:33 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:42.757 04:40:33 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.757 04:40:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.757 04:40:33 -- common/autotest_common.sh@10 -- # set +x 00:03:42.757 ************************************ 00:03:42.757 START TEST env 00:03:42.757 ************************************ 00:03:42.757 04:40:33 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:42.757 * Looking for test storage... 00:03:42.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:42.757 04:40:33 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:42.757 04:40:33 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:42.757 04:40:33 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:42.757 04:40:33 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:42.757 04:40:33 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:42.757 04:40:33 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:42.757 04:40:33 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:42.757 04:40:33 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:42.757 04:40:33 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:42.757 04:40:33 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:42.757 04:40:33 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:42.757 04:40:33 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:42.757 04:40:33 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:42.757 04:40:33 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:42.757 04:40:33 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:42.757 04:40:33 env -- scripts/common.sh@344 -- # case "$op" in 00:03:42.757 04:40:33 env -- scripts/common.sh@345 -- # : 1 00:03:42.757 04:40:33 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:42.757 04:40:33 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:42.757 04:40:33 env -- scripts/common.sh@365 -- # decimal 1 00:03:42.757 04:40:33 env -- scripts/common.sh@353 -- # local d=1 00:03:42.758 04:40:33 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:42.758 04:40:33 env -- scripts/common.sh@355 -- # echo 1 00:03:42.758 04:40:33 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:42.758 04:40:33 env -- scripts/common.sh@366 -- # decimal 2 00:03:42.758 04:40:33 env -- scripts/common.sh@353 -- # local d=2 00:03:42.758 04:40:33 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:42.758 04:40:33 env -- scripts/common.sh@355 -- # echo 2 00:03:42.758 04:40:33 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:42.758 04:40:33 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:42.758 04:40:33 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:42.758 04:40:33 env -- scripts/common.sh@368 -- # return 0 00:03:42.758 04:40:33 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:42.758 04:40:33 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:42.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.758 --rc genhtml_branch_coverage=1 00:03:42.758 --rc genhtml_function_coverage=1 00:03:42.758 --rc genhtml_legend=1 00:03:42.758 --rc geninfo_all_blocks=1 00:03:42.758 --rc geninfo_unexecuted_blocks=1 00:03:42.758 00:03:42.758 ' 00:03:42.758 04:40:33 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:42.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.758 --rc genhtml_branch_coverage=1 00:03:42.758 --rc genhtml_function_coverage=1 00:03:42.758 --rc genhtml_legend=1 00:03:42.758 --rc geninfo_all_blocks=1 00:03:42.758 --rc geninfo_unexecuted_blocks=1 00:03:42.758 00:03:42.758 ' 00:03:42.758 04:40:33 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:42.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:42.758 --rc genhtml_branch_coverage=1 00:03:42.758 --rc genhtml_function_coverage=1 00:03:42.758 --rc genhtml_legend=1 00:03:42.758 --rc geninfo_all_blocks=1 00:03:42.758 --rc geninfo_unexecuted_blocks=1 00:03:42.758 00:03:42.758 ' 00:03:42.758 04:40:33 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:42.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.758 --rc genhtml_branch_coverage=1 00:03:42.758 --rc genhtml_function_coverage=1 00:03:42.758 --rc genhtml_legend=1 00:03:42.758 --rc geninfo_all_blocks=1 00:03:42.758 --rc geninfo_unexecuted_blocks=1 00:03:42.758 00:03:42.758 ' 00:03:42.758 04:40:33 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:42.758 04:40:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.758 04:40:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.758 04:40:33 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.758 ************************************ 00:03:42.758 START TEST env_memory 00:03:42.758 ************************************ 00:03:42.758 04:40:33 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:42.758 00:03:42.758 00:03:42.758 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.758 http://cunit.sourceforge.net/ 00:03:42.758 00:03:42.758 00:03:42.758 Suite: memory 00:03:42.758 Test: alloc and free memory map ...[2024-12-10 04:40:33.677070] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:42.758 passed 00:03:42.758 Test: mem map translation ...[2024-12-10 04:40:33.695665] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:42.758 [2024-12-10 
04:40:33.695682] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:42.758 [2024-12-10 04:40:33.695717] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:42.758 [2024-12-10 04:40:33.695724] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:42.758 passed 00:03:42.758 Test: mem map registration ...[2024-12-10 04:40:33.731923] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:42.758 [2024-12-10 04:40:33.731941] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:42.758 passed 00:03:42.758 Test: mem map adjacent registrations ...passed 00:03:42.758 00:03:42.758 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.758 suites 1 1 n/a 0 0 00:03:42.758 tests 4 4 4 0 0 00:03:42.758 asserts 152 152 152 0 n/a 00:03:42.758 00:03:42.758 Elapsed time = 0.134 seconds 00:03:42.758 00:03:42.758 real 0m0.148s 00:03:42.758 user 0m0.142s 00:03:42.758 sys 0m0.005s 00:03:42.758 04:40:33 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.758 04:40:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:42.758 ************************************ 00:03:42.758 END TEST env_memory 00:03:42.758 ************************************ 00:03:42.758 04:40:33 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:42.758 04:40:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:42.758 04:40:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.758 04:40:33 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.758 ************************************ 00:03:42.758 START TEST env_vtophys 00:03:42.758 ************************************ 00:03:42.758 04:40:33 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:42.758 EAL: lib.eal log level changed from notice to debug 00:03:42.758 EAL: Detected lcore 0 as core 0 on socket 0 00:03:42.758 EAL: Detected lcore 1 as core 1 on socket 0 00:03:42.758 EAL: Detected lcore 2 as core 2 on socket 0 00:03:42.758 EAL: Detected lcore 3 as core 3 on socket 0 00:03:42.758 EAL: Detected lcore 4 as core 4 on socket 0 00:03:42.758 EAL: Detected lcore 5 as core 5 on socket 0 00:03:42.758 EAL: Detected lcore 6 as core 6 on socket 0 00:03:42.758 EAL: Detected lcore 7 as core 8 on socket 0 00:03:42.758 EAL: Detected lcore 8 as core 9 on socket 0 00:03:42.758 EAL: Detected lcore 9 as core 10 on socket 0 00:03:42.758 EAL: Detected lcore 10 as core 11 on socket 0 00:03:42.758 EAL: Detected lcore 11 as core 12 on socket 0 00:03:42.758 EAL: Detected lcore 12 as core 13 on socket 0 00:03:42.758 EAL: Detected lcore 13 as core 16 on socket 0 00:03:42.758 EAL: Detected lcore 14 as core 17 on socket 0 00:03:42.758 EAL: Detected lcore 15 as core 18 on socket 0 00:03:42.758 EAL: Detected lcore 16 as core 19 on socket 0 00:03:42.758 EAL: Detected lcore 17 as core 20 on socket 0 00:03:42.758 EAL: Detected lcore 18 as core 21 on socket 0 00:03:42.758 EAL: Detected lcore 19 as core 25 on socket 0 00:03:42.758 EAL: Detected lcore 20 as core 26 on socket 0 00:03:42.758 EAL: Detected lcore 21 as core 27 on socket 0 00:03:42.758 EAL: Detected lcore 22 as core 28 on socket 0 00:03:42.758 EAL: Detected lcore 23 as core 29 on socket 0 00:03:42.758 EAL: Detected lcore 24 as core 0 on socket 1 00:03:42.758 EAL: Detected lcore 25 
as core 1 on socket 1 00:03:42.758 EAL: Detected lcore 26 as core 2 on socket 1 00:03:42.758 EAL: Detected lcore 27 as core 3 on socket 1 00:03:42.758 EAL: Detected lcore 28 as core 4 on socket 1 00:03:42.758 EAL: Detected lcore 29 as core 5 on socket 1 00:03:42.758 EAL: Detected lcore 30 as core 6 on socket 1 00:03:42.758 EAL: Detected lcore 31 as core 8 on socket 1 00:03:42.758 EAL: Detected lcore 32 as core 9 on socket 1 00:03:42.758 EAL: Detected lcore 33 as core 10 on socket 1 00:03:42.758 EAL: Detected lcore 34 as core 11 on socket 1 00:03:42.758 EAL: Detected lcore 35 as core 12 on socket 1 00:03:42.758 EAL: Detected lcore 36 as core 13 on socket 1 00:03:42.758 EAL: Detected lcore 37 as core 16 on socket 1 00:03:42.758 EAL: Detected lcore 38 as core 17 on socket 1 00:03:42.758 EAL: Detected lcore 39 as core 18 on socket 1 00:03:42.758 EAL: Detected lcore 40 as core 19 on socket 1 00:03:42.758 EAL: Detected lcore 41 as core 20 on socket 1 00:03:42.758 EAL: Detected lcore 42 as core 21 on socket 1 00:03:42.758 EAL: Detected lcore 43 as core 25 on socket 1 00:03:42.758 EAL: Detected lcore 44 as core 26 on socket 1 00:03:42.758 EAL: Detected lcore 45 as core 27 on socket 1 00:03:42.758 EAL: Detected lcore 46 as core 28 on socket 1 00:03:42.758 EAL: Detected lcore 47 as core 29 on socket 1 00:03:42.758 EAL: Detected lcore 48 as core 0 on socket 0 00:03:42.758 EAL: Detected lcore 49 as core 1 on socket 0 00:03:42.758 EAL: Detected lcore 50 as core 2 on socket 0 00:03:42.758 EAL: Detected lcore 51 as core 3 on socket 0 00:03:42.758 EAL: Detected lcore 52 as core 4 on socket 0 00:03:42.758 EAL: Detected lcore 53 as core 5 on socket 0 00:03:42.758 EAL: Detected lcore 54 as core 6 on socket 0 00:03:42.758 EAL: Detected lcore 55 as core 8 on socket 0 00:03:42.758 EAL: Detected lcore 56 as core 9 on socket 0 00:03:42.758 EAL: Detected lcore 57 as core 10 on socket 0 00:03:42.758 EAL: Detected lcore 58 as core 11 on socket 0 00:03:42.758 EAL: Detected lcore 59 as core 12 
on socket 0 00:03:42.758 EAL: Detected lcore 60 as core 13 on socket 0 00:03:42.758 EAL: Detected lcore 61 as core 16 on socket 0 00:03:42.758 EAL: Detected lcore 62 as core 17 on socket 0 00:03:42.758 EAL: Detected lcore 63 as core 18 on socket 0 00:03:42.758 EAL: Detected lcore 64 as core 19 on socket 0 00:03:42.758 EAL: Detected lcore 65 as core 20 on socket 0 00:03:42.758 EAL: Detected lcore 66 as core 21 on socket 0 00:03:42.758 EAL: Detected lcore 67 as core 25 on socket 0 00:03:42.758 EAL: Detected lcore 68 as core 26 on socket 0 00:03:42.758 EAL: Detected lcore 69 as core 27 on socket 0 00:03:42.758 EAL: Detected lcore 70 as core 28 on socket 0 00:03:42.758 EAL: Detected lcore 71 as core 29 on socket 0 00:03:42.758 EAL: Detected lcore 72 as core 0 on socket 1 00:03:42.758 EAL: Detected lcore 73 as core 1 on socket 1 00:03:42.758 EAL: Detected lcore 74 as core 2 on socket 1 00:03:42.758 EAL: Detected lcore 75 as core 3 on socket 1 00:03:42.758 EAL: Detected lcore 76 as core 4 on socket 1 00:03:42.758 EAL: Detected lcore 77 as core 5 on socket 1 00:03:42.758 EAL: Detected lcore 78 as core 6 on socket 1 00:03:42.758 EAL: Detected lcore 79 as core 8 on socket 1 00:03:42.758 EAL: Detected lcore 80 as core 9 on socket 1 00:03:42.758 EAL: Detected lcore 81 as core 10 on socket 1 00:03:42.758 EAL: Detected lcore 82 as core 11 on socket 1 00:03:42.758 EAL: Detected lcore 83 as core 12 on socket 1 00:03:42.758 EAL: Detected lcore 84 as core 13 on socket 1 00:03:42.758 EAL: Detected lcore 85 as core 16 on socket 1 00:03:42.759 EAL: Detected lcore 86 as core 17 on socket 1 00:03:42.759 EAL: Detected lcore 87 as core 18 on socket 1 00:03:42.759 EAL: Detected lcore 88 as core 19 on socket 1 00:03:42.759 EAL: Detected lcore 89 as core 20 on socket 1 00:03:42.759 EAL: Detected lcore 90 as core 21 on socket 1 00:03:42.759 EAL: Detected lcore 91 as core 25 on socket 1 00:03:42.759 EAL: Detected lcore 92 as core 26 on socket 1 00:03:42.759 EAL: Detected lcore 93 as core 27 on 
socket 1 00:03:42.759 EAL: Detected lcore 94 as core 28 on socket 1 00:03:42.759 EAL: Detected lcore 95 as core 29 on socket 1 00:03:42.759 EAL: Maximum logical cores by configuration: 128 00:03:42.759 EAL: Detected CPU lcores: 96 00:03:42.759 EAL: Detected NUMA nodes: 2 00:03:42.759 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:42.759 EAL: Detected shared linkage of DPDK 00:03:42.759 EAL: No shared files mode enabled, IPC will be disabled 00:03:43.018 EAL: Bus pci wants IOVA as 'DC' 00:03:43.018 EAL: Buses did not request a specific IOVA mode. 00:03:43.018 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:43.018 EAL: Selected IOVA mode 'VA' 00:03:43.018 EAL: Probing VFIO support... 00:03:43.018 EAL: IOMMU type 1 (Type 1) is supported 00:03:43.018 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:43.018 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:43.018 EAL: VFIO support initialized 00:03:43.018 EAL: Ask a virtual area of 0x2e000 bytes 00:03:43.018 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:43.018 EAL: Setting up physically contiguous memory... 
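The EAL lines above end with VFIO probing and IOVA mode selection ("Bus pci wants IOVA as 'DC'" followed by "Selected IOVA mode 'VA'"). As a minimal shell sketch of that decision (not DPDK's actual C code; both variable values below are assumptions read off the log), the logic reduces to: no bus demanded a specific mode, and a working IOMMU was found, so IOVA-as-VA wins:

```shell
# Sketch of the IOVA mode choice the EAL log reports.
# Assumptions (taken from the log, not queried from hardware):
bus_request="DC"     # "Bus pci wants IOVA as 'DC'" (don't care)
iommu_available=1    # "IOMMU type 1 (Type 1) is supported" / VFIO initialized

if [ "$bus_request" = "DC" ] && [ "$iommu_available" -eq 1 ]; then
  iova_mode="VA"     # IOMMU present: use virtual addresses as IOVAs
else
  iova_mode="PA"     # fall back to physical addresses
fi
echo "Selected IOVA mode '$iova_mode'"
# prints: Selected IOVA mode 'VA'
```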
00:03:43.018 EAL: Setting maximum number of open files to 524288
00:03:43.018 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:43.018 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:43.018 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:43.018 EAL: Ask a virtual area of 0x61000 bytes
00:03:43.018 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:43.018 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:43.018 EAL: Ask a virtual area of 0x400000000 bytes
00:03:43.018 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:43.018 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:43.018 EAL: Ask a virtual area of 0x61000 bytes
00:03:43.018 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:43.018 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:43.018 EAL: Ask a virtual area of 0x400000000 bytes
00:03:43.018 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:43.018 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:43.018 EAL: Ask a virtual area of 0x61000 bytes
00:03:43.018 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:43.019 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:43.019 EAL: Ask a virtual area of 0x400000000 bytes
00:03:43.019 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:43.019 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:43.019 EAL: Ask a virtual area of 0x61000 bytes
00:03:43.019 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:43.019 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:43.019 EAL: Ask a virtual area of 0x400000000 bytes
00:03:43.019 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:43.019 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:43.019 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:43.019 EAL: Ask a virtual area of 0x61000 bytes
00:03:43.019 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:43.019 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:43.019 EAL: Ask a virtual area of 0x400000000 bytes
00:03:43.019 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:43.019 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:03:43.019 EAL: Ask a virtual area of 0x61000 bytes
00:03:43.019 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:43.019 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:43.019 EAL: Ask a virtual area of 0x400000000 bytes
00:03:43.019 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:43.019 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:43.019 EAL: Ask a virtual area of 0x61000 bytes
00:03:43.019 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:43.019 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:43.019 EAL: Ask a virtual area of 0x400000000 bytes
00:03:43.019 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:43.019 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:43.019 EAL: Ask a virtual area of 0x61000 bytes
00:03:43.019 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:43.019 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:43.019 EAL: Ask a virtual area of 0x400000000 bytes
00:03:43.019 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:03:43.019 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:03:43.019 EAL: Hugepages will be freed exactly as allocated.
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: TSC frequency is ~2100000 KHz
00:03:43.019 EAL: Main lcore 0 is ready (tid=7fc2760bda00;cpuset=[0])
00:03:43.019 EAL: Trying to obtain current memory policy.
00:03:43.019 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:43.019 EAL: Restoring previous memory policy: 0
00:03:43.019 EAL: request: mp_malloc_sync
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: Heap on socket 0 was expanded by 2MB
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: No PCI address specified using 'addr=' in: bus=pci
00:03:43.019 EAL: Mem event callback 'spdk:(nil)' registered
00:03:43.019
00:03:43.019
00:03:43.019 CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.019 http://cunit.sourceforge.net/
00:03:43.019
00:03:43.019
00:03:43.019 Suite: components_suite
00:03:43.019 Test: vtophys_malloc_test ...passed
00:03:43.019 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:43.019 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:43.019 EAL: Restoring previous memory policy: 4
00:03:43.019 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.019 EAL: request: mp_malloc_sync
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: Heap on socket 0 was expanded by 4MB
00:03:43.019 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.019 EAL: request: mp_malloc_sync
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: Heap on socket 0 was shrunk by 4MB
00:03:43.019 EAL: Trying to obtain current memory policy.
00:03:43.019 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:43.019 EAL: Restoring previous memory policy: 4
00:03:43.019 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.019 EAL: request: mp_malloc_sync
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: Heap on socket 0 was expanded by 6MB
00:03:43.019 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.019 EAL: request: mp_malloc_sync
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: Heap on socket 0 was shrunk by 6MB
00:03:43.019 EAL: Trying to obtain current memory policy.
00:03:43.019 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:43.019 EAL: Restoring previous memory policy: 4
00:03:43.019 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.019 EAL: request: mp_malloc_sync
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: Heap on socket 0 was expanded by 10MB
00:03:43.019 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.019 EAL: request: mp_malloc_sync
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: Heap on socket 0 was shrunk by 10MB
00:03:43.019 EAL: Trying to obtain current memory policy.
00:03:43.019 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:43.019 EAL: Restoring previous memory policy: 4
00:03:43.019 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.019 EAL: request: mp_malloc_sync
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: Heap on socket 0 was expanded by 18MB
00:03:43.019 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.019 EAL: request: mp_malloc_sync
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: Heap on socket 0 was shrunk by 18MB
00:03:43.019 EAL: Trying to obtain current memory policy.
00:03:43.019 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:43.019 EAL: Restoring previous memory policy: 4
00:03:43.019 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.019 EAL: request: mp_malloc_sync
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: Heap on socket 0 was expanded by 34MB
00:03:43.019 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.019 EAL: request: mp_malloc_sync
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: Heap on socket 0 was shrunk by 34MB
00:03:43.019 EAL: Trying to obtain current memory policy.
00:03:43.019 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:43.019 EAL: Restoring previous memory policy: 4
00:03:43.019 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.019 EAL: request: mp_malloc_sync
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: Heap on socket 0 was expanded by 66MB
00:03:43.019 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.019 EAL: request: mp_malloc_sync
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: Heap on socket 0 was shrunk by 66MB
00:03:43.019 EAL: Trying to obtain current memory policy.
00:03:43.019 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:43.019 EAL: Restoring previous memory policy: 4
00:03:43.019 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.019 EAL: request: mp_malloc_sync
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: Heap on socket 0 was expanded by 130MB
00:03:43.019 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.019 EAL: request: mp_malloc_sync
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: Heap on socket 0 was shrunk by 130MB
00:03:43.019 EAL: Trying to obtain current memory policy.
00:03:43.019 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:43.019 EAL: Restoring previous memory policy: 4
00:03:43.019 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.019 EAL: request: mp_malloc_sync
00:03:43.019 EAL: No shared files mode enabled, IPC is disabled
00:03:43.019 EAL: Heap on socket 0 was expanded by 258MB
00:03:43.019 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.279 EAL: request: mp_malloc_sync
00:03:43.279 EAL: No shared files mode enabled, IPC is disabled
00:03:43.279 EAL: Heap on socket 0 was shrunk by 258MB
00:03:43.279 EAL: Trying to obtain current memory policy.
00:03:43.279 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:43.279 EAL: Restoring previous memory policy: 4
00:03:43.279 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.279 EAL: request: mp_malloc_sync
00:03:43.279 EAL: No shared files mode enabled, IPC is disabled
00:03:43.279 EAL: Heap on socket 0 was expanded by 514MB
00:03:43.279 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.538 EAL: request: mp_malloc_sync
00:03:43.538 EAL: No shared files mode enabled, IPC is disabled
00:03:43.538 EAL: Heap on socket 0 was shrunk by 514MB
00:03:43.538 EAL: Trying to obtain current memory policy.
00:03:43.538 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:43.538 EAL: Restoring previous memory policy: 4
00:03:43.538 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.538 EAL: request: mp_malloc_sync
00:03:43.538 EAL: No shared files mode enabled, IPC is disabled
00:03:43.538 EAL: Heap on socket 0 was expanded by 1026MB
00:03:43.797 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.057 EAL: request: mp_malloc_sync
00:03:44.057 EAL: No shared files mode enabled, IPC is disabled
00:03:44.057 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:44.057 passed
00:03:44.057
00:03:44.057 Run Summary: Type Total Ran Passed Failed Inactive
00:03:44.057 suites 1 1 n/a 0 0
00:03:44.057 tests 2 2 2 0 0
00:03:44.057 asserts 497 497 497 0 n/a
00:03:44.057
00:03:44.057 Elapsed time = 0.969 seconds
00:03:44.057 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.057 EAL: request: mp_malloc_sync
00:03:44.057 EAL: No shared files mode enabled, IPC is disabled
00:03:44.057 EAL: Heap on socket 0 was shrunk by 2MB
00:03:44.057 EAL: No shared files mode enabled, IPC is disabled
00:03:44.057 EAL: No shared files mode enabled, IPC is disabled
00:03:44.057 EAL: No shared files mode enabled, IPC is disabled
00:03:44.057
00:03:44.057 real 0m1.102s
00:03:44.057 user 0m0.644s
00:03:44.057 sys 0m0.427s
00:03:44.057 04:40:34 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:44.057 04:40:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:44.057 ************************************
00:03:44.057 END TEST env_vtophys
00:03:44.057 ************************************
00:03:44.057 04:40:34 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:44.057 04:40:34 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:44.057 04:40:34 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:44.057 04:40:34 env -- common/autotest_common.sh@10 -- # set +x
00:03:44.057 ************************************
00:03:44.057 START TEST env_pci
00:03:44.057 ************************************
00:03:44.057 04:40:35 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:44.057
00:03:44.057
00:03:44.057 CUnit - A unit testing framework for C - Version 2.1-3
00:03:44.057 http://cunit.sourceforge.net/
00:03:44.057
00:03:44.057
00:03:44.057 Suite: pci
00:03:44.057 Test: pci_hook ...[2024-12-10 04:40:35.038399] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 429911 has claimed it
00:03:44.057 EAL: Cannot find device (10000:00:01.0)
00:03:44.057 EAL: Failed to attach device on primary process
00:03:44.057 passed
00:03:44.057
00:03:44.057 Run Summary: Type Total Ran Passed Failed Inactive
00:03:44.057 suites 1 1 n/a 0 0
00:03:44.057 tests 1 1 1 0 0
00:03:44.057 asserts 25 25 25 0 n/a
00:03:44.057
00:03:44.057 Elapsed time = 0.025 seconds
00:03:44.057
00:03:44.057 real 0m0.045s
00:03:44.057 user 0m0.012s
00:03:44.057 sys 0m0.033s
00:03:44.058 04:40:35 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:44.058 04:40:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:44.058 ************************************
00:03:44.058 END TEST env_pci
00:03:44.058 ************************************
00:03:44.058 04:40:35 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:44.058 04:40:35 env -- env/env.sh@15 -- # uname
00:03:44.058 04:40:35 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:44.058 04:40:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:44.058 04:40:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:44.058 04:40:35 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:03:44.058 04:40:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:44.058 04:40:35 env -- common/autotest_common.sh@10 -- # set +x
00:03:44.058 ************************************
00:03:44.058 START TEST env_dpdk_post_init
00:03:44.058 ************************************
00:03:44.058 04:40:35 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:44.058 EAL: Detected CPU lcores: 96
00:03:44.058 EAL: Detected NUMA nodes: 2
00:03:44.058 EAL: Detected shared linkage of DPDK
00:03:44.058 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:44.318 EAL: Selected IOVA mode 'VA'
00:03:44.318 EAL: VFIO support initialized
00:03:44.318 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:44.318 EAL: Using IOMMU type 1 (Type 1)
00:03:44.318 EAL: Ignore mapping IO port bar(1)
00:03:44.318 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:03:44.318 EAL: Ignore mapping IO port bar(1)
00:03:44.318 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:03:44.318 EAL: Ignore mapping IO port bar(1)
00:03:44.318 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:03:44.318 EAL: Ignore mapping IO port bar(1)
00:03:44.318 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:03:44.318 EAL: Ignore mapping IO port bar(1)
00:03:44.318 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:03:44.318 EAL: Ignore mapping IO port bar(1)
00:03:44.318 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:03:44.318 EAL: Ignore mapping IO port bar(1)
00:03:44.318 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:03:44.318 EAL: Ignore mapping IO port bar(1)
00:03:44.318 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:03:45.280 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:03:45.280 EAL: Ignore mapping IO port bar(1)
00:03:45.280 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:03:45.280 EAL: Ignore mapping IO port bar(1)
00:03:45.280 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:03:45.280 EAL: Ignore mapping IO port bar(1)
00:03:45.280 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:03:45.280 EAL: Ignore mapping IO port bar(1)
00:03:45.280 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:03:45.280 EAL: Ignore mapping IO port bar(1)
00:03:45.280 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:03:45.280 EAL: Ignore mapping IO port bar(1)
00:03:45.280 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:03:45.280 EAL: Ignore mapping IO port bar(1)
00:03:45.280 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:03:45.280 EAL: Ignore mapping IO port bar(1)
00:03:45.280 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:03:48.573 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:03:48.573 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:03:48.573 Starting DPDK initialization...
00:03:48.573 Starting SPDK post initialization...
00:03:48.573 SPDK NVMe probe
00:03:48.573 Attaching to 0000:5e:00.0
00:03:48.573 Attached to 0000:5e:00.0
00:03:48.573 Cleaning up...
00:03:48.573
00:03:48.573 real 0m4.414s
00:03:48.573 user 0m3.031s
00:03:48.573 sys 0m0.457s
00:03:48.573 04:40:39 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:48.573 04:40:39 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:03:48.573 ************************************
00:03:48.573 END TEST env_dpdk_post_init
00:03:48.573 ************************************
00:03:48.573 04:40:39 env -- env/env.sh@26 -- # uname
00:03:48.573 04:40:39 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:48.573 04:40:39 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:48.573 04:40:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:48.573 04:40:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:48.573 04:40:39 env -- common/autotest_common.sh@10 -- # set +x
00:03:48.573 ************************************
00:03:48.573 START TEST env_mem_callbacks
00:03:48.573 ************************************
00:03:48.573 04:40:39 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:48.573 EAL: Detected CPU lcores: 96
00:03:48.573 EAL: Detected NUMA nodes: 2
00:03:48.573 EAL: Detected shared linkage of DPDK
00:03:48.573 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:48.573 EAL: Selected IOVA mode 'VA'
00:03:48.573 EAL: VFIO support initialized
00:03:48.573 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:48.573
00:03:48.573
00:03:48.573 CUnit - A unit testing framework for C - Version 2.1-3
00:03:48.573 http://cunit.sourceforge.net/
00:03:48.573
00:03:48.573
00:03:48.573 Suite: memory
00:03:48.573 Test: test ...
00:03:48.573 register 0x200000200000 2097152
00:03:48.573 malloc 3145728
00:03:48.573 register 0x200000400000 4194304
00:03:48.573 buf 0x200000500000 len 3145728 PASSED
00:03:48.573 malloc 64
00:03:48.573 buf 0x2000004fff40 len 64 PASSED
00:03:48.573 malloc 4194304
00:03:48.573 register 0x200000800000 6291456
00:03:48.573 buf 0x200000a00000 len 4194304 PASSED
00:03:48.573 free 0x200000500000 3145728
00:03:48.573 free 0x2000004fff40 64
00:03:48.573 unregister 0x200000400000 4194304 PASSED
00:03:48.573 free 0x200000a00000 4194304
00:03:48.573 unregister 0x200000800000 6291456 PASSED
00:03:48.573 malloc 8388608
00:03:48.573 register 0x200000400000 10485760
00:03:48.573 buf 0x200000600000 len 8388608 PASSED
00:03:48.573 free 0x200000600000 8388608
00:03:48.573 unregister 0x200000400000 10485760 PASSED
00:03:48.573 passed
00:03:48.573
00:03:48.573 Run Summary: Type Total Ran Passed Failed Inactive
00:03:48.573 suites 1 1 n/a 0 0
00:03:48.573 tests 1 1 1 0 0
00:03:48.573 asserts 15 15 15 0 n/a
00:03:48.573
00:03:48.573 Elapsed time = 0.008 seconds
00:03:48.573
00:03:48.573 real 0m0.058s
00:03:48.573 user 0m0.023s
00:03:48.573 sys 0m0.035s
00:03:48.573 04:40:39 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:48.573 04:40:39 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:48.573 ************************************
00:03:48.573 END TEST env_mem_callbacks
00:03:48.573 ************************************
00:03:48.833
00:03:48.833 real 0m6.298s
00:03:48.833 user 0m4.111s
00:03:48.833 sys 0m1.262s
00:03:48.833 04:40:39 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:48.833 04:40:39 env -- common/autotest_common.sh@10 -- # set +x
00:03:48.833 ************************************
00:03:48.833 END TEST env
00:03:48.833 ************************************
00:03:48.833 04:40:39 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:48.833 04:40:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:48.833 04:40:39 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:48.833 04:40:39 -- common/autotest_common.sh@10 -- # set +x
00:03:48.833 ************************************
00:03:48.833 START TEST rpc
00:03:48.833 ************************************
00:03:48.833 04:40:39 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:48.833 * Looking for test storage...
00:03:48.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:48.833 04:40:39 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:48.833 04:40:39 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:03:48.833 04:40:39 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:48.833 04:40:39 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:48.833 04:40:39 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:48.833 04:40:39 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:48.833 04:40:39 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:48.833 04:40:39 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:48.833 04:40:39 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:48.833 04:40:39 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:48.833 04:40:39 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:48.833 04:40:39 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:48.833 04:40:39 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:48.833 04:40:39 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:48.833 04:40:39 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:48.833 04:40:39 rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:48.833 04:40:39 rpc -- scripts/common.sh@345 -- # : 1
00:03:48.833 04:40:39 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:48.833 04:40:39 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:48.833 04:40:39 rpc -- scripts/common.sh@365 -- # decimal 1
00:03:48.833 04:40:39 rpc -- scripts/common.sh@353 -- # local d=1
00:03:48.833 04:40:39 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:48.833 04:40:39 rpc -- scripts/common.sh@355 -- # echo 1
00:03:48.833 04:40:39 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:49.092 04:40:39 rpc -- scripts/common.sh@366 -- # decimal 2
00:03:49.092 04:40:39 rpc -- scripts/common.sh@353 -- # local d=2
00:03:49.092 04:40:39 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:49.092 04:40:39 rpc -- scripts/common.sh@355 -- # echo 2
00:03:49.092 04:40:39 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:49.092 04:40:39 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:49.092 04:40:39 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:49.092 04:40:39 rpc -- scripts/common.sh@368 -- # return 0
00:03:49.092 04:40:39 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:49.092 04:40:39 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:49.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:49.092 --rc genhtml_branch_coverage=1
00:03:49.092 --rc genhtml_function_coverage=1
00:03:49.092 --rc genhtml_legend=1
00:03:49.092 --rc geninfo_all_blocks=1
00:03:49.092 --rc geninfo_unexecuted_blocks=1
00:03:49.092
00:03:49.092 '
00:03:49.092 04:40:39 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:49.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:49.092 --rc genhtml_branch_coverage=1
00:03:49.092 --rc genhtml_function_coverage=1
00:03:49.092 --rc genhtml_legend=1
00:03:49.092 --rc geninfo_all_blocks=1
00:03:49.092 --rc geninfo_unexecuted_blocks=1
00:03:49.092
00:03:49.092 '
00:03:49.092 04:40:39 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:03:49.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:49.092 --rc genhtml_branch_coverage=1
00:03:49.092 --rc genhtml_function_coverage=1
00:03:49.092 --rc genhtml_legend=1
00:03:49.092 --rc geninfo_all_blocks=1
00:03:49.092 --rc geninfo_unexecuted_blocks=1
00:03:49.092
00:03:49.092 '
00:03:49.092 04:40:39 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:03:49.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:49.092 --rc genhtml_branch_coverage=1
00:03:49.092 --rc genhtml_function_coverage=1
00:03:49.092 --rc genhtml_legend=1
00:03:49.092 --rc geninfo_all_blocks=1
00:03:49.092 --rc geninfo_unexecuted_blocks=1
00:03:49.092
00:03:49.092 '
00:03:49.092 04:40:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=430760
00:03:49.092 04:40:39 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:03:49.092 04:40:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:49.092 04:40:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 430760
00:03:49.092 04:40:39 rpc -- common/autotest_common.sh@835 -- # '[' -z 430760 ']'
00:03:49.092 04:40:39 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:49.092 04:40:39 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:49.092 04:40:39 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:49.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:49.092 04:40:39 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:49.092 04:40:39 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:49.092 [2024-12-10 04:40:40.029981] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
00:03:49.092 [2024-12-10 04:40:40.030031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid430760 ]
00:03:49.093 [2024-12-10 04:40:40.107315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:49.093 [2024-12-10 04:40:40.147993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:49.093 [2024-12-10 04:40:40.148030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 430760' to capture a snapshot of events at runtime.
00:03:49.093 [2024-12-10 04:40:40.148038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:03:49.093 [2024-12-10 04:40:40.148044] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:03:49.093 [2024-12-10 04:40:40.148050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid430760 for offline analysis/debug.
00:03:49.093 [2024-12-10 04:40:40.148546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:49.352 04:40:40 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:49.352 04:40:40 rpc -- common/autotest_common.sh@868 -- # return 0
00:03:49.352 04:40:40 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:49.352 04:40:40 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:49.352 04:40:40 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:03:49.352 04:40:40 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:03:49.352 04:40:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:49.352 04:40:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:49.352 04:40:40 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:49.352 ************************************
00:03:49.352 START TEST rpc_integrity
00:03:49.352 ************************************
00:03:49.352 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:03:49.352 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:49.352 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:49.352 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:49.352 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:49.352 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:49.352 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:03:49.352 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:49.352 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:49.352 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:49.352 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:49.352 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:49.352 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:03:49.352 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:49.352 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:49.352 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:49.352 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:49.352 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:49.352 {
00:03:49.352 "name": "Malloc0",
00:03:49.352 "aliases": [
00:03:49.352 "93e64207-5945-46ea-a737-89bdb9452edf"
00:03:49.352 ],
00:03:49.352 "product_name": "Malloc disk",
00:03:49.352 "block_size": 512,
00:03:49.352 "num_blocks": 16384,
00:03:49.352 "uuid": "93e64207-5945-46ea-a737-89bdb9452edf",
00:03:49.352 "assigned_rate_limits": {
00:03:49.352 "rw_ios_per_sec": 0,
00:03:49.352 "rw_mbytes_per_sec": 0,
00:03:49.352 "r_mbytes_per_sec": 0,
00:03:49.352 "w_mbytes_per_sec": 0
00:03:49.352 },
00:03:49.352 "claimed": false,
00:03:49.352 "zoned": false,
00:03:49.352 "supported_io_types": {
00:03:49.352 "read": true,
00:03:49.352 "write": true,
00:03:49.352 "unmap": true,
00:03:49.352 "flush": true,
00:03:49.352 "reset": true,
00:03:49.352 "nvme_admin": false,
00:03:49.352 "nvme_io": false,
00:03:49.352 "nvme_io_md": false,
00:03:49.352 "write_zeroes": true,
00:03:49.352 "zcopy": true,
00:03:49.352 "get_zone_info": false,
00:03:49.352 "zone_management": false,
00:03:49.353 "zone_append": false,
00:03:49.353 "compare": false,
00:03:49.353 "compare_and_write": false,
00:03:49.353 "abort": true,
00:03:49.353 "seek_hole": false,
00:03:49.353 "seek_data": false,
00:03:49.353 "copy": true,
00:03:49.353 "nvme_iov_md": false
00:03:49.353 },
00:03:49.353 "memory_domains": [
00:03:49.353 {
00:03:49.353 "dma_device_id": "system",
00:03:49.353 "dma_device_type": 1
00:03:49.353 },
00:03:49.353 {
00:03:49.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:49.353 "dma_device_type": 2
00:03:49.353 }
00:03:49.353 ],
00:03:49.353 "driver_specific": {}
00:03:49.353 }
00:03:49.353 ]'
00:03:49.353 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:03:49.613 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:03:49.613 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:03:49.613 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:49.613 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:49.613 [2024-12-10 04:40:40.522205] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:03:49.613 [2024-12-10 04:40:40.522233] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:49.613 [2024-12-10 04:40:40.522245] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc3e700
00:03:49.613 [2024-12-10 04:40:40.522251] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:49.613 [2024-12-10 04:40:40.523309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:49.613 [2024-12-10 04:40:40.523330] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:03:49.613 Passthru0
00:03:49.613 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:49.613 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:49.613 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:49.613 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:49.613 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:49.613 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:03:49.613 {
00:03:49.613 "name": "Malloc0",
00:03:49.613 "aliases": [
00:03:49.613 "93e64207-5945-46ea-a737-89bdb9452edf"
00:03:49.613 ],
00:03:49.613 "product_name": "Malloc disk",
00:03:49.613 "block_size": 512,
00:03:49.613 "num_blocks": 16384,
00:03:49.613 "uuid": "93e64207-5945-46ea-a737-89bdb9452edf",
00:03:49.613 "assigned_rate_limits": {
00:03:49.613 "rw_ios_per_sec": 0,
00:03:49.613 "rw_mbytes_per_sec": 0,
00:03:49.613 "r_mbytes_per_sec": 0,
00:03:49.613 "w_mbytes_per_sec": 0
00:03:49.613 },
00:03:49.613 "claimed": true,
00:03:49.613 "claim_type": "exclusive_write",
00:03:49.613 "zoned": false,
00:03:49.613 "supported_io_types": {
00:03:49.613 "read": true,
00:03:49.613 "write": true,
00:03:49.613 "unmap": true,
00:03:49.613 "flush": true,
00:03:49.613 "reset": true,
00:03:49.613 "nvme_admin": false,
00:03:49.613 "nvme_io": false,
00:03:49.613 "nvme_io_md": false,
00:03:49.613 "write_zeroes": true,
00:03:49.613 "zcopy": true,
00:03:49.613 "get_zone_info": false,
00:03:49.613 "zone_management": false,
00:03:49.613 "zone_append": false,
00:03:49.613 "compare": false,
00:03:49.613 "compare_and_write": false,
00:03:49.613 "abort": true,
00:03:49.613 "seek_hole": false,
00:03:49.613 "seek_data": false,
00:03:49.613 "copy": true,
00:03:49.613 "nvme_iov_md": false
00:03:49.613 },
00:03:49.613 "memory_domains": [
00:03:49.613 {
00:03:49.613 "dma_device_id": "system",
00:03:49.613 "dma_device_type": 1
00:03:49.613 },
00:03:49.613 {
00:03:49.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:49.613 "dma_device_type": 2
00:03:49.613 }
00:03:49.613 ],
00:03:49.613 "driver_specific": {}
00:03:49.613 },
00:03:49.613 {
00:03:49.613 "name": "Passthru0", 00:03:49.613 "aliases": [ 00:03:49.613 "1e91d6e3-cbba-5e40-8c35-b6993da4ff39" 00:03:49.613 ], 00:03:49.613 "product_name": "passthru", 00:03:49.613 "block_size": 512, 00:03:49.613 "num_blocks": 16384, 00:03:49.613 "uuid": "1e91d6e3-cbba-5e40-8c35-b6993da4ff39", 00:03:49.613 "assigned_rate_limits": { 00:03:49.613 "rw_ios_per_sec": 0, 00:03:49.613 "rw_mbytes_per_sec": 0, 00:03:49.613 "r_mbytes_per_sec": 0, 00:03:49.613 "w_mbytes_per_sec": 0 00:03:49.613 }, 00:03:49.613 "claimed": false, 00:03:49.613 "zoned": false, 00:03:49.613 "supported_io_types": { 00:03:49.613 "read": true, 00:03:49.613 "write": true, 00:03:49.613 "unmap": true, 00:03:49.613 "flush": true, 00:03:49.613 "reset": true, 00:03:49.613 "nvme_admin": false, 00:03:49.613 "nvme_io": false, 00:03:49.613 "nvme_io_md": false, 00:03:49.613 "write_zeroes": true, 00:03:49.613 "zcopy": true, 00:03:49.613 "get_zone_info": false, 00:03:49.613 "zone_management": false, 00:03:49.613 "zone_append": false, 00:03:49.613 "compare": false, 00:03:49.613 "compare_and_write": false, 00:03:49.613 "abort": true, 00:03:49.613 "seek_hole": false, 00:03:49.613 "seek_data": false, 00:03:49.613 "copy": true, 00:03:49.613 "nvme_iov_md": false 00:03:49.613 }, 00:03:49.613 "memory_domains": [ 00:03:49.613 { 00:03:49.613 "dma_device_id": "system", 00:03:49.613 "dma_device_type": 1 00:03:49.613 }, 00:03:49.613 { 00:03:49.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.613 "dma_device_type": 2 00:03:49.613 } 00:03:49.613 ], 00:03:49.613 "driver_specific": { 00:03:49.613 "passthru": { 00:03:49.613 "name": "Passthru0", 00:03:49.613 "base_bdev_name": "Malloc0" 00:03:49.613 } 00:03:49.613 } 00:03:49.613 } 00:03:49.613 ]' 00:03:49.613 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:49.613 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:49.613 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:49.613 04:40:40 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.613 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.613 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.613 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:49.613 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.613 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.613 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.613 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:49.613 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.613 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.613 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.613 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:49.613 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:49.613 04:40:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:49.613 00:03:49.613 real 0m0.268s 00:03:49.613 user 0m0.162s 00:03:49.613 sys 0m0.044s 00:03:49.613 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.613 04:40:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.613 ************************************ 00:03:49.613 END TEST rpc_integrity 00:03:49.613 ************************************ 00:03:49.613 04:40:40 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:49.613 04:40:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.613 04:40:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.613 04:40:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.613 ************************************ 00:03:49.613 START TEST rpc_plugins 
00:03:49.613 ************************************ 00:03:49.613 04:40:40 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:49.613 04:40:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:49.613 04:40:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.613 04:40:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.873 04:40:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.873 04:40:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:49.873 04:40:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:49.873 04:40:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.873 04:40:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.873 04:40:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.873 04:40:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:49.873 { 00:03:49.873 "name": "Malloc1", 00:03:49.873 "aliases": [ 00:03:49.873 "ff75c52d-9e07-4a89-8f62-336e8d42f657" 00:03:49.873 ], 00:03:49.873 "product_name": "Malloc disk", 00:03:49.873 "block_size": 4096, 00:03:49.873 "num_blocks": 256, 00:03:49.873 "uuid": "ff75c52d-9e07-4a89-8f62-336e8d42f657", 00:03:49.873 "assigned_rate_limits": { 00:03:49.873 "rw_ios_per_sec": 0, 00:03:49.873 "rw_mbytes_per_sec": 0, 00:03:49.873 "r_mbytes_per_sec": 0, 00:03:49.873 "w_mbytes_per_sec": 0 00:03:49.873 }, 00:03:49.873 "claimed": false, 00:03:49.873 "zoned": false, 00:03:49.873 "supported_io_types": { 00:03:49.873 "read": true, 00:03:49.873 "write": true, 00:03:49.873 "unmap": true, 00:03:49.873 "flush": true, 00:03:49.873 "reset": true, 00:03:49.873 "nvme_admin": false, 00:03:49.873 "nvme_io": false, 00:03:49.873 "nvme_io_md": false, 00:03:49.873 "write_zeroes": true, 00:03:49.873 "zcopy": true, 00:03:49.873 "get_zone_info": false, 00:03:49.873 "zone_management": false, 00:03:49.873 
"zone_append": false, 00:03:49.873 "compare": false, 00:03:49.873 "compare_and_write": false, 00:03:49.873 "abort": true, 00:03:49.873 "seek_hole": false, 00:03:49.873 "seek_data": false, 00:03:49.873 "copy": true, 00:03:49.873 "nvme_iov_md": false 00:03:49.873 }, 00:03:49.873 "memory_domains": [ 00:03:49.873 { 00:03:49.873 "dma_device_id": "system", 00:03:49.873 "dma_device_type": 1 00:03:49.873 }, 00:03:49.873 { 00:03:49.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.873 "dma_device_type": 2 00:03:49.873 } 00:03:49.873 ], 00:03:49.873 "driver_specific": {} 00:03:49.873 } 00:03:49.873 ]' 00:03:49.873 04:40:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:49.873 04:40:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:49.873 04:40:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:49.873 04:40:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.873 04:40:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.873 04:40:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.873 04:40:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:49.873 04:40:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.873 04:40:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.873 04:40:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.873 04:40:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:49.873 04:40:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:49.873 04:40:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:49.873 00:03:49.873 real 0m0.143s 00:03:49.873 user 0m0.084s 00:03:49.873 sys 0m0.021s 00:03:49.873 04:40:40 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.873 04:40:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.873 ************************************ 
00:03:49.873 END TEST rpc_plugins 00:03:49.873 ************************************ 00:03:49.873 04:40:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:49.873 04:40:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.873 04:40:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.873 04:40:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.873 ************************************ 00:03:49.873 START TEST rpc_trace_cmd_test 00:03:49.873 ************************************ 00:03:49.873 04:40:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:49.873 04:40:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:49.873 04:40:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:49.873 04:40:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.873 04:40:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:49.873 04:40:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.874 04:40:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:49.874 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid430760", 00:03:49.874 "tpoint_group_mask": "0x8", 00:03:49.874 "iscsi_conn": { 00:03:49.874 "mask": "0x2", 00:03:49.874 "tpoint_mask": "0x0" 00:03:49.874 }, 00:03:49.874 "scsi": { 00:03:49.874 "mask": "0x4", 00:03:49.874 "tpoint_mask": "0x0" 00:03:49.874 }, 00:03:49.874 "bdev": { 00:03:49.874 "mask": "0x8", 00:03:49.874 "tpoint_mask": "0xffffffffffffffff" 00:03:49.874 }, 00:03:49.874 "nvmf_rdma": { 00:03:49.874 "mask": "0x10", 00:03:49.874 "tpoint_mask": "0x0" 00:03:49.874 }, 00:03:49.874 "nvmf_tcp": { 00:03:49.874 "mask": "0x20", 00:03:49.874 "tpoint_mask": "0x0" 00:03:49.874 }, 00:03:49.874 "ftl": { 00:03:49.874 "mask": "0x40", 00:03:49.874 "tpoint_mask": "0x0" 00:03:49.874 }, 00:03:49.874 "blobfs": { 00:03:49.874 "mask": "0x80", 00:03:49.874 
"tpoint_mask": "0x0" 00:03:49.874 }, 00:03:49.874 "dsa": { 00:03:49.874 "mask": "0x200", 00:03:49.874 "tpoint_mask": "0x0" 00:03:49.874 }, 00:03:49.874 "thread": { 00:03:49.874 "mask": "0x400", 00:03:49.874 "tpoint_mask": "0x0" 00:03:49.874 }, 00:03:49.874 "nvme_pcie": { 00:03:49.874 "mask": "0x800", 00:03:49.874 "tpoint_mask": "0x0" 00:03:49.874 }, 00:03:49.874 "iaa": { 00:03:49.874 "mask": "0x1000", 00:03:49.874 "tpoint_mask": "0x0" 00:03:49.874 }, 00:03:49.874 "nvme_tcp": { 00:03:49.874 "mask": "0x2000", 00:03:49.874 "tpoint_mask": "0x0" 00:03:49.874 }, 00:03:49.874 "bdev_nvme": { 00:03:49.874 "mask": "0x4000", 00:03:49.874 "tpoint_mask": "0x0" 00:03:49.874 }, 00:03:49.874 "sock": { 00:03:49.874 "mask": "0x8000", 00:03:49.874 "tpoint_mask": "0x0" 00:03:49.874 }, 00:03:49.874 "blob": { 00:03:49.874 "mask": "0x10000", 00:03:49.874 "tpoint_mask": "0x0" 00:03:49.874 }, 00:03:49.874 "bdev_raid": { 00:03:49.874 "mask": "0x20000", 00:03:49.874 "tpoint_mask": "0x0" 00:03:49.874 }, 00:03:49.874 "scheduler": { 00:03:49.874 "mask": "0x40000", 00:03:49.874 "tpoint_mask": "0x0" 00:03:49.874 } 00:03:49.874 }' 00:03:49.874 04:40:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:50.133 04:40:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:50.133 04:40:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:50.133 04:40:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:50.133 04:40:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:50.133 04:40:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:50.133 04:40:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:50.133 04:40:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:50.133 04:40:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:50.133 04:40:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:50.133 00:03:50.133 real 0m0.224s 00:03:50.133 user 0m0.186s 00:03:50.133 sys 0m0.029s 00:03:50.133 04:40:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.133 04:40:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:50.133 ************************************ 00:03:50.133 END TEST rpc_trace_cmd_test 00:03:50.133 ************************************ 00:03:50.133 04:40:41 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:50.133 04:40:41 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:50.133 04:40:41 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:50.133 04:40:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.133 04:40:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.133 04:40:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.133 ************************************ 00:03:50.133 START TEST rpc_daemon_integrity 00:03:50.133 ************************************ 00:03:50.133 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:50.133 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:50.133 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.133 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.133 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.133 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:50.133 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:50.393 { 00:03:50.393 "name": "Malloc2", 00:03:50.393 "aliases": [ 00:03:50.393 "3a65b965-16c5-44ff-b222-cad8eaf8a23c" 00:03:50.393 ], 00:03:50.393 "product_name": "Malloc disk", 00:03:50.393 "block_size": 512, 00:03:50.393 "num_blocks": 16384, 00:03:50.393 "uuid": "3a65b965-16c5-44ff-b222-cad8eaf8a23c", 00:03:50.393 "assigned_rate_limits": { 00:03:50.393 "rw_ios_per_sec": 0, 00:03:50.393 "rw_mbytes_per_sec": 0, 00:03:50.393 "r_mbytes_per_sec": 0, 00:03:50.393 "w_mbytes_per_sec": 0 00:03:50.393 }, 00:03:50.393 "claimed": false, 00:03:50.393 "zoned": false, 00:03:50.393 "supported_io_types": { 00:03:50.393 "read": true, 00:03:50.393 "write": true, 00:03:50.393 "unmap": true, 00:03:50.393 "flush": true, 00:03:50.393 "reset": true, 00:03:50.393 "nvme_admin": false, 00:03:50.393 "nvme_io": false, 00:03:50.393 "nvme_io_md": false, 00:03:50.393 "write_zeroes": true, 00:03:50.393 "zcopy": true, 00:03:50.393 "get_zone_info": false, 00:03:50.393 "zone_management": false, 00:03:50.393 "zone_append": false, 00:03:50.393 "compare": false, 00:03:50.393 "compare_and_write": false, 00:03:50.393 "abort": true, 00:03:50.393 "seek_hole": false, 00:03:50.393 "seek_data": false, 00:03:50.393 "copy": true, 00:03:50.393 "nvme_iov_md": false 00:03:50.393 }, 00:03:50.393 "memory_domains": [ 00:03:50.393 { 
00:03:50.393 "dma_device_id": "system", 00:03:50.393 "dma_device_type": 1 00:03:50.393 }, 00:03:50.393 { 00:03:50.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.393 "dma_device_type": 2 00:03:50.393 } 00:03:50.393 ], 00:03:50.393 "driver_specific": {} 00:03:50.393 } 00:03:50.393 ]' 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.393 [2024-12-10 04:40:41.368473] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:50.393 [2024-12-10 04:40:41.368500] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:50.393 [2024-12-10 04:40:41.368513] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xafa950 00:03:50.393 [2024-12-10 04:40:41.368519] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:50.393 [2024-12-10 04:40:41.369472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:50.393 [2024-12-10 04:40:41.369492] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:50.393 Passthru0 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:50.393 { 00:03:50.393 "name": "Malloc2", 00:03:50.393 "aliases": [ 00:03:50.393 "3a65b965-16c5-44ff-b222-cad8eaf8a23c" 00:03:50.393 ], 00:03:50.393 "product_name": "Malloc disk", 00:03:50.393 "block_size": 512, 00:03:50.393 "num_blocks": 16384, 00:03:50.393 "uuid": "3a65b965-16c5-44ff-b222-cad8eaf8a23c", 00:03:50.393 "assigned_rate_limits": { 00:03:50.393 "rw_ios_per_sec": 0, 00:03:50.393 "rw_mbytes_per_sec": 0, 00:03:50.393 "r_mbytes_per_sec": 0, 00:03:50.393 "w_mbytes_per_sec": 0 00:03:50.393 }, 00:03:50.393 "claimed": true, 00:03:50.393 "claim_type": "exclusive_write", 00:03:50.393 "zoned": false, 00:03:50.393 "supported_io_types": { 00:03:50.393 "read": true, 00:03:50.393 "write": true, 00:03:50.393 "unmap": true, 00:03:50.393 "flush": true, 00:03:50.393 "reset": true, 00:03:50.393 "nvme_admin": false, 00:03:50.393 "nvme_io": false, 00:03:50.393 "nvme_io_md": false, 00:03:50.393 "write_zeroes": true, 00:03:50.393 "zcopy": true, 00:03:50.393 "get_zone_info": false, 00:03:50.393 "zone_management": false, 00:03:50.393 "zone_append": false, 00:03:50.393 "compare": false, 00:03:50.393 "compare_and_write": false, 00:03:50.393 "abort": true, 00:03:50.393 "seek_hole": false, 00:03:50.393 "seek_data": false, 00:03:50.393 "copy": true, 00:03:50.393 "nvme_iov_md": false 00:03:50.393 }, 00:03:50.393 "memory_domains": [ 00:03:50.393 { 00:03:50.393 "dma_device_id": "system", 00:03:50.393 "dma_device_type": 1 00:03:50.393 }, 00:03:50.393 { 00:03:50.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.393 "dma_device_type": 2 00:03:50.393 } 00:03:50.393 ], 00:03:50.393 "driver_specific": {} 00:03:50.393 }, 00:03:50.393 { 00:03:50.393 "name": "Passthru0", 00:03:50.393 "aliases": [ 00:03:50.393 "b0d13f48-5f23-52c8-a266-02f2c0b93f9b" 00:03:50.393 ], 00:03:50.393 "product_name": "passthru", 00:03:50.393 "block_size": 512, 00:03:50.393 "num_blocks": 16384, 00:03:50.393 "uuid": 
"b0d13f48-5f23-52c8-a266-02f2c0b93f9b", 00:03:50.393 "assigned_rate_limits": { 00:03:50.393 "rw_ios_per_sec": 0, 00:03:50.393 "rw_mbytes_per_sec": 0, 00:03:50.393 "r_mbytes_per_sec": 0, 00:03:50.393 "w_mbytes_per_sec": 0 00:03:50.393 }, 00:03:50.393 "claimed": false, 00:03:50.393 "zoned": false, 00:03:50.393 "supported_io_types": { 00:03:50.393 "read": true, 00:03:50.393 "write": true, 00:03:50.393 "unmap": true, 00:03:50.393 "flush": true, 00:03:50.393 "reset": true, 00:03:50.393 "nvme_admin": false, 00:03:50.393 "nvme_io": false, 00:03:50.393 "nvme_io_md": false, 00:03:50.393 "write_zeroes": true, 00:03:50.393 "zcopy": true, 00:03:50.393 "get_zone_info": false, 00:03:50.393 "zone_management": false, 00:03:50.393 "zone_append": false, 00:03:50.393 "compare": false, 00:03:50.393 "compare_and_write": false, 00:03:50.393 "abort": true, 00:03:50.393 "seek_hole": false, 00:03:50.393 "seek_data": false, 00:03:50.393 "copy": true, 00:03:50.393 "nvme_iov_md": false 00:03:50.393 }, 00:03:50.393 "memory_domains": [ 00:03:50.393 { 00:03:50.393 "dma_device_id": "system", 00:03:50.393 "dma_device_type": 1 00:03:50.393 }, 00:03:50.393 { 00:03:50.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.393 "dma_device_type": 2 00:03:50.393 } 00:03:50.393 ], 00:03:50.393 "driver_specific": { 00:03:50.393 "passthru": { 00:03:50.393 "name": "Passthru0", 00:03:50.393 "base_bdev_name": "Malloc2" 00:03:50.393 } 00:03:50.393 } 00:03:50.393 } 00:03:50.393 ]' 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:50.393 04:40:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:50.393 00:03:50.393 real 0m0.273s 00:03:50.393 user 0m0.178s 00:03:50.393 sys 0m0.032s 00:03:50.394 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.394 04:40:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.394 ************************************ 00:03:50.394 END TEST rpc_daemon_integrity 00:03:50.394 ************************************ 00:03:50.653 04:40:41 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:50.653 04:40:41 rpc -- rpc/rpc.sh@84 -- # killprocess 430760 00:03:50.653 04:40:41 rpc -- common/autotest_common.sh@954 -- # '[' -z 430760 ']' 00:03:50.653 04:40:41 rpc -- common/autotest_common.sh@958 -- # kill -0 430760 00:03:50.653 04:40:41 rpc -- common/autotest_common.sh@959 -- # uname 00:03:50.653 04:40:41 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:50.653 04:40:41 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 430760 00:03:50.653 04:40:41 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:50.653 04:40:41 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:50.653 04:40:41 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 430760' 00:03:50.653 killing process with pid 430760 00:03:50.653 04:40:41 rpc -- common/autotest_common.sh@973 -- # kill 430760 00:03:50.653 04:40:41 rpc -- common/autotest_common.sh@978 -- # wait 430760 00:03:50.913 00:03:50.913 real 0m2.087s 00:03:50.913 user 0m2.641s 00:03:50.913 sys 0m0.710s 00:03:50.913 04:40:41 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.913 04:40:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.913 ************************************ 00:03:50.913 END TEST rpc 00:03:50.913 ************************************ 00:03:50.913 04:40:41 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:50.913 04:40:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.913 04:40:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.913 04:40:41 -- common/autotest_common.sh@10 -- # set +x 00:03:50.913 ************************************ 00:03:50.913 START TEST skip_rpc 00:03:50.913 ************************************ 00:03:50.913 04:40:41 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:50.913 * Looking for test storage... 
00:03:50.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:50.913 04:40:42 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:50.913 04:40:42 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:51.173 04:40:42 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:51.173 04:40:42 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:51.173 04:40:42 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:51.173 04:40:42 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.173 04:40:42 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:51.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.173 --rc genhtml_branch_coverage=1 00:03:51.173 --rc genhtml_function_coverage=1 00:03:51.173 --rc genhtml_legend=1 00:03:51.173 --rc geninfo_all_blocks=1 00:03:51.173 --rc geninfo_unexecuted_blocks=1 00:03:51.173 00:03:51.173 ' 00:03:51.173 04:40:42 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:51.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.173 --rc genhtml_branch_coverage=1 00:03:51.173 --rc genhtml_function_coverage=1 00:03:51.173 --rc genhtml_legend=1 00:03:51.173 --rc geninfo_all_blocks=1 00:03:51.173 --rc geninfo_unexecuted_blocks=1 00:03:51.173 00:03:51.173 ' 00:03:51.173 04:40:42 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:03:51.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.173 --rc genhtml_branch_coverage=1 00:03:51.173 --rc genhtml_function_coverage=1 00:03:51.173 --rc genhtml_legend=1 00:03:51.173 --rc geninfo_all_blocks=1 00:03:51.173 --rc geninfo_unexecuted_blocks=1 00:03:51.173 00:03:51.173 ' 00:03:51.173 04:40:42 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:51.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.173 --rc genhtml_branch_coverage=1 00:03:51.173 --rc genhtml_function_coverage=1 00:03:51.173 --rc genhtml_legend=1 00:03:51.173 --rc geninfo_all_blocks=1 00:03:51.173 --rc geninfo_unexecuted_blocks=1 00:03:51.173 00:03:51.173 ' 00:03:51.173 04:40:42 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:51.173 04:40:42 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:51.173 04:40:42 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:51.173 04:40:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.173 04:40:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.173 04:40:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.173 ************************************ 00:03:51.173 START TEST skip_rpc 00:03:51.173 ************************************ 00:03:51.173 04:40:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:51.173 04:40:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=431383 00:03:51.173 04:40:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:51.173 04:40:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:51.173 04:40:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:51.173 [2024-12-10 04:40:42.215987] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:03:51.173 [2024-12-10 04:40:42.216028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431383 ] 00:03:51.173 [2024-12-10 04:40:42.291561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.433 [2024-12-10 04:40:42.330643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.710 04:40:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:56.710 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:56.710 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:56.710 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:56.710 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:56.710 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:56.710 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:56.710 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:56.710 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.710 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.710 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:56.710 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:56.710 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:56.710 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:56.710 04:40:47 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:56.711 04:40:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:56.711 04:40:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 431383 00:03:56.711 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 431383 ']' 00:03:56.711 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 431383 00:03:56.711 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:56.711 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:56.711 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 431383 00:03:56.711 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:56.711 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:56.711 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 431383' 00:03:56.711 killing process with pid 431383 00:03:56.711 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 431383 00:03:56.711 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 431383 00:03:56.711 00:03:56.711 real 0m5.358s 00:03:56.711 user 0m5.110s 00:03:56.711 sys 0m0.285s 00:03:56.711 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.711 04:40:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.711 ************************************ 00:03:56.711 END TEST skip_rpc 00:03:56.711 ************************************ 00:03:56.711 04:40:47 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:56.711 04:40:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.711 04:40:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.711 04:40:47 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:03:56.711 ************************************ 00:03:56.711 START TEST skip_rpc_with_json 00:03:56.711 ************************************ 00:03:56.711 04:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:56.711 04:40:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:56.711 04:40:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=432305 00:03:56.711 04:40:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.711 04:40:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:56.711 04:40:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 432305 00:03:56.711 04:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 432305 ']' 00:03:56.711 04:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.711 04:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:56.711 04:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.711 04:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:56.711 04:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.711 [2024-12-10 04:40:47.647559] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:03:56.711 [2024-12-10 04:40:47.647602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432305 ] 00:03:56.711 [2024-12-10 04:40:47.722882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.711 [2024-12-10 04:40:47.758384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.970 04:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:56.970 04:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:56.970 04:40:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:56.970 04:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.970 04:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.970 [2024-12-10 04:40:47.978374] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:56.970 request: 00:03:56.970 { 00:03:56.970 "trtype": "tcp", 00:03:56.970 "method": "nvmf_get_transports", 00:03:56.970 "req_id": 1 00:03:56.970 } 00:03:56.970 Got JSON-RPC error response 00:03:56.970 response: 00:03:56.970 { 00:03:56.970 "code": -19, 00:03:56.970 "message": "No such device" 00:03:56.970 } 00:03:56.970 04:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:56.970 04:40:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:56.970 04:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.970 04:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.970 [2024-12-10 04:40:47.990480] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:56.970 04:40:47 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.970 04:40:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:56.970 04:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.970 04:40:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.230 04:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.230 04:40:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:57.230 { 00:03:57.230 "subsystems": [ 00:03:57.230 { 00:03:57.230 "subsystem": "fsdev", 00:03:57.230 "config": [ 00:03:57.230 { 00:03:57.230 "method": "fsdev_set_opts", 00:03:57.230 "params": { 00:03:57.230 "fsdev_io_pool_size": 65535, 00:03:57.230 "fsdev_io_cache_size": 256 00:03:57.230 } 00:03:57.230 } 00:03:57.230 ] 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "subsystem": "vfio_user_target", 00:03:57.230 "config": null 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "subsystem": "keyring", 00:03:57.230 "config": [] 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "subsystem": "iobuf", 00:03:57.230 "config": [ 00:03:57.230 { 00:03:57.230 "method": "iobuf_set_options", 00:03:57.230 "params": { 00:03:57.230 "small_pool_count": 8192, 00:03:57.230 "large_pool_count": 1024, 00:03:57.230 "small_bufsize": 8192, 00:03:57.230 "large_bufsize": 135168, 00:03:57.230 "enable_numa": false 00:03:57.230 } 00:03:57.230 } 00:03:57.230 ] 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "subsystem": "sock", 00:03:57.230 "config": [ 00:03:57.230 { 00:03:57.230 "method": "sock_set_default_impl", 00:03:57.230 "params": { 00:03:57.230 "impl_name": "posix" 00:03:57.230 } 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "method": "sock_impl_set_options", 00:03:57.230 "params": { 00:03:57.230 "impl_name": "ssl", 00:03:57.230 "recv_buf_size": 4096, 00:03:57.230 "send_buf_size": 4096, 
00:03:57.230 "enable_recv_pipe": true, 00:03:57.230 "enable_quickack": false, 00:03:57.230 "enable_placement_id": 0, 00:03:57.230 "enable_zerocopy_send_server": true, 00:03:57.230 "enable_zerocopy_send_client": false, 00:03:57.230 "zerocopy_threshold": 0, 00:03:57.230 "tls_version": 0, 00:03:57.230 "enable_ktls": false 00:03:57.230 } 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "method": "sock_impl_set_options", 00:03:57.230 "params": { 00:03:57.230 "impl_name": "posix", 00:03:57.230 "recv_buf_size": 2097152, 00:03:57.230 "send_buf_size": 2097152, 00:03:57.230 "enable_recv_pipe": true, 00:03:57.230 "enable_quickack": false, 00:03:57.230 "enable_placement_id": 0, 00:03:57.230 "enable_zerocopy_send_server": true, 00:03:57.230 "enable_zerocopy_send_client": false, 00:03:57.230 "zerocopy_threshold": 0, 00:03:57.230 "tls_version": 0, 00:03:57.230 "enable_ktls": false 00:03:57.230 } 00:03:57.230 } 00:03:57.230 ] 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "subsystem": "vmd", 00:03:57.230 "config": [] 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "subsystem": "accel", 00:03:57.230 "config": [ 00:03:57.230 { 00:03:57.230 "method": "accel_set_options", 00:03:57.230 "params": { 00:03:57.230 "small_cache_size": 128, 00:03:57.230 "large_cache_size": 16, 00:03:57.230 "task_count": 2048, 00:03:57.230 "sequence_count": 2048, 00:03:57.230 "buf_count": 2048 00:03:57.230 } 00:03:57.230 } 00:03:57.230 ] 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "subsystem": "bdev", 00:03:57.230 "config": [ 00:03:57.230 { 00:03:57.230 "method": "bdev_set_options", 00:03:57.230 "params": { 00:03:57.230 "bdev_io_pool_size": 65535, 00:03:57.230 "bdev_io_cache_size": 256, 00:03:57.230 "bdev_auto_examine": true, 00:03:57.230 "iobuf_small_cache_size": 128, 00:03:57.230 "iobuf_large_cache_size": 16 00:03:57.230 } 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "method": "bdev_raid_set_options", 00:03:57.230 "params": { 00:03:57.230 "process_window_size_kb": 1024, 00:03:57.230 "process_max_bandwidth_mb_sec": 0 
00:03:57.230 } 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "method": "bdev_iscsi_set_options", 00:03:57.230 "params": { 00:03:57.230 "timeout_sec": 30 00:03:57.230 } 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "method": "bdev_nvme_set_options", 00:03:57.230 "params": { 00:03:57.230 "action_on_timeout": "none", 00:03:57.230 "timeout_us": 0, 00:03:57.230 "timeout_admin_us": 0, 00:03:57.230 "keep_alive_timeout_ms": 10000, 00:03:57.230 "arbitration_burst": 0, 00:03:57.230 "low_priority_weight": 0, 00:03:57.230 "medium_priority_weight": 0, 00:03:57.230 "high_priority_weight": 0, 00:03:57.230 "nvme_adminq_poll_period_us": 10000, 00:03:57.230 "nvme_ioq_poll_period_us": 0, 00:03:57.230 "io_queue_requests": 0, 00:03:57.230 "delay_cmd_submit": true, 00:03:57.230 "transport_retry_count": 4, 00:03:57.230 "bdev_retry_count": 3, 00:03:57.230 "transport_ack_timeout": 0, 00:03:57.230 "ctrlr_loss_timeout_sec": 0, 00:03:57.230 "reconnect_delay_sec": 0, 00:03:57.230 "fast_io_fail_timeout_sec": 0, 00:03:57.230 "disable_auto_failback": false, 00:03:57.230 "generate_uuids": false, 00:03:57.230 "transport_tos": 0, 00:03:57.230 "nvme_error_stat": false, 00:03:57.230 "rdma_srq_size": 0, 00:03:57.230 "io_path_stat": false, 00:03:57.230 "allow_accel_sequence": false, 00:03:57.230 "rdma_max_cq_size": 0, 00:03:57.230 "rdma_cm_event_timeout_ms": 0, 00:03:57.230 "dhchap_digests": [ 00:03:57.230 "sha256", 00:03:57.230 "sha384", 00:03:57.230 "sha512" 00:03:57.230 ], 00:03:57.230 "dhchap_dhgroups": [ 00:03:57.230 "null", 00:03:57.230 "ffdhe2048", 00:03:57.230 "ffdhe3072", 00:03:57.230 "ffdhe4096", 00:03:57.230 "ffdhe6144", 00:03:57.230 "ffdhe8192" 00:03:57.230 ] 00:03:57.230 } 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "method": "bdev_nvme_set_hotplug", 00:03:57.230 "params": { 00:03:57.230 "period_us": 100000, 00:03:57.230 "enable": false 00:03:57.230 } 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "method": "bdev_wait_for_examine" 00:03:57.230 } 00:03:57.230 ] 00:03:57.230 }, 00:03:57.230 { 
00:03:57.230 "subsystem": "scsi", 00:03:57.230 "config": null 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "subsystem": "scheduler", 00:03:57.230 "config": [ 00:03:57.230 { 00:03:57.230 "method": "framework_set_scheduler", 00:03:57.230 "params": { 00:03:57.230 "name": "static" 00:03:57.230 } 00:03:57.230 } 00:03:57.230 ] 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "subsystem": "vhost_scsi", 00:03:57.230 "config": [] 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "subsystem": "vhost_blk", 00:03:57.230 "config": [] 00:03:57.230 }, 00:03:57.230 { 00:03:57.230 "subsystem": "ublk", 00:03:57.230 "config": [] 00:03:57.230 }, 00:03:57.231 { 00:03:57.231 "subsystem": "nbd", 00:03:57.231 "config": [] 00:03:57.231 }, 00:03:57.231 { 00:03:57.231 "subsystem": "nvmf", 00:03:57.231 "config": [ 00:03:57.231 { 00:03:57.231 "method": "nvmf_set_config", 00:03:57.231 "params": { 00:03:57.231 "discovery_filter": "match_any", 00:03:57.231 "admin_cmd_passthru": { 00:03:57.231 "identify_ctrlr": false 00:03:57.231 }, 00:03:57.231 "dhchap_digests": [ 00:03:57.231 "sha256", 00:03:57.231 "sha384", 00:03:57.231 "sha512" 00:03:57.231 ], 00:03:57.231 "dhchap_dhgroups": [ 00:03:57.231 "null", 00:03:57.231 "ffdhe2048", 00:03:57.231 "ffdhe3072", 00:03:57.231 "ffdhe4096", 00:03:57.231 "ffdhe6144", 00:03:57.231 "ffdhe8192" 00:03:57.231 ] 00:03:57.231 } 00:03:57.231 }, 00:03:57.231 { 00:03:57.231 "method": "nvmf_set_max_subsystems", 00:03:57.231 "params": { 00:03:57.231 "max_subsystems": 1024 00:03:57.231 } 00:03:57.231 }, 00:03:57.231 { 00:03:57.231 "method": "nvmf_set_crdt", 00:03:57.231 "params": { 00:03:57.231 "crdt1": 0, 00:03:57.231 "crdt2": 0, 00:03:57.231 "crdt3": 0 00:03:57.231 } 00:03:57.231 }, 00:03:57.231 { 00:03:57.231 "method": "nvmf_create_transport", 00:03:57.231 "params": { 00:03:57.231 "trtype": "TCP", 00:03:57.231 "max_queue_depth": 128, 00:03:57.231 "max_io_qpairs_per_ctrlr": 127, 00:03:57.231 "in_capsule_data_size": 4096, 00:03:57.231 "max_io_size": 131072, 00:03:57.231 
"io_unit_size": 131072, 00:03:57.231 "max_aq_depth": 128, 00:03:57.231 "num_shared_buffers": 511, 00:03:57.231 "buf_cache_size": 4294967295, 00:03:57.231 "dif_insert_or_strip": false, 00:03:57.231 "zcopy": false, 00:03:57.231 "c2h_success": true, 00:03:57.231 "sock_priority": 0, 00:03:57.231 "abort_timeout_sec": 1, 00:03:57.231 "ack_timeout": 0, 00:03:57.231 "data_wr_pool_size": 0 00:03:57.231 } 00:03:57.231 } 00:03:57.231 ] 00:03:57.231 }, 00:03:57.231 { 00:03:57.231 "subsystem": "iscsi", 00:03:57.231 "config": [ 00:03:57.231 { 00:03:57.231 "method": "iscsi_set_options", 00:03:57.231 "params": { 00:03:57.231 "node_base": "iqn.2016-06.io.spdk", 00:03:57.231 "max_sessions": 128, 00:03:57.231 "max_connections_per_session": 2, 00:03:57.231 "max_queue_depth": 64, 00:03:57.231 "default_time2wait": 2, 00:03:57.231 "default_time2retain": 20, 00:03:57.231 "first_burst_length": 8192, 00:03:57.231 "immediate_data": true, 00:03:57.231 "allow_duplicated_isid": false, 00:03:57.231 "error_recovery_level": 0, 00:03:57.231 "nop_timeout": 60, 00:03:57.231 "nop_in_interval": 30, 00:03:57.231 "disable_chap": false, 00:03:57.231 "require_chap": false, 00:03:57.231 "mutual_chap": false, 00:03:57.231 "chap_group": 0, 00:03:57.231 "max_large_datain_per_connection": 64, 00:03:57.231 "max_r2t_per_connection": 4, 00:03:57.231 "pdu_pool_size": 36864, 00:03:57.231 "immediate_data_pool_size": 16384, 00:03:57.231 "data_out_pool_size": 2048 00:03:57.231 } 00:03:57.231 } 00:03:57.231 ] 00:03:57.231 } 00:03:57.231 ] 00:03:57.231 } 00:03:57.231 04:40:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:57.231 04:40:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 432305 00:03:57.231 04:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 432305 ']' 00:03:57.231 04:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 432305 00:03:57.231 04:40:48 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:57.231 04:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:57.231 04:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 432305 00:03:57.231 04:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:57.231 04:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:57.231 04:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 432305' 00:03:57.231 killing process with pid 432305 00:03:57.231 04:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 432305 00:03:57.231 04:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 432305 00:03:57.491 04:40:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=432531 00:03:57.491 04:40:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:57.491 04:40:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:02.768 04:40:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 432531 00:04:02.768 04:40:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 432531 ']' 00:04:02.768 04:40:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 432531 00:04:02.768 04:40:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:02.768 04:40:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:02.768 04:40:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 432531 00:04:02.768 04:40:53 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:02.768 04:40:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:02.768 04:40:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 432531' 00:04:02.768 killing process with pid 432531 00:04:02.768 04:40:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 432531 00:04:02.768 04:40:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 432531 00:04:02.768 04:40:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:02.768 04:40:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:02.768 00:04:02.768 real 0m6.286s 00:04:02.768 user 0m5.988s 00:04:02.768 sys 0m0.602s 00:04:02.768 04:40:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.768 04:40:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:02.768 ************************************ 00:04:02.768 END TEST skip_rpc_with_json 00:04:02.768 ************************************ 00:04:03.032 04:40:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:03.032 04:40:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.032 04:40:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.032 04:40:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.032 ************************************ 00:04:03.032 START TEST skip_rpc_with_delay 00:04:03.032 ************************************ 00:04:03.032 04:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:03.032 04:40:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:03.032 04:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:03.032 04:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:03.032 04:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.032 04:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.032 04:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.032 04:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.032 04:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.032 04:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.032 04:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.032 04:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:03.032 04:40:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:03.032 [2024-12-10 04:40:54.001667] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
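The `skip_rpc_with_delay` trace above exercises the `NOT` helper: it launches `spdk_tgt` with both `--no-rpc-server` and `--wait-for-rpc` and passes only if the app refuses the combination (the app.c error line) and exits nonzero. A self-contained sketch of that invert-the-exit-status pattern; `fake_tgt` below is a hypothetical stand-in for the real `spdk_tgt` binary so the example runs anywhere:

```shell
#!/usr/bin/env bash
# Sketch of the test's "NOT" pattern: succeed only when the wrapped
# command fails, so an expected error becomes a passing assertion.
NOT() {
  if "$@"; then
    return 1    # command unexpectedly succeeded
  fi
  return 0      # command failed, as the test expects
}

fake_tgt() {
  # Hypothetical stand-in for spdk_tgt: reject --wait-for-rpc when the
  # RPC server is disabled, mirroring the app.c error in the log.
  case " $* " in
    *" --no-rpc-server "*" --wait-for-rpc "*)
      echo "Cannot use '--wait-for-rpc' if no RPC server is going to be started." >&2
      return 1 ;;
  esac
  return 0
}

NOT fake_tgt --no-rpc-server -m 0x1 --wait-for-rpc && echo "exit-on-invalid-flags OK"
```

The same helper is what the later `exit_on_failed_rpc_init` test uses to assert that a second `spdk_tgt` instance fails when the RPC socket is already in use.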
00:04:03.032 04:40:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:03.032 04:40:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:03.032 04:40:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:03.032 04:40:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:03.032 00:04:03.032 real 0m0.069s 00:04:03.032 user 0m0.040s 00:04:03.032 sys 0m0.028s 00:04:03.032 04:40:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.032 04:40:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:03.032 ************************************ 00:04:03.032 END TEST skip_rpc_with_delay 00:04:03.032 ************************************ 00:04:03.032 04:40:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:03.032 04:40:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:03.032 04:40:54 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:03.032 04:40:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.032 04:40:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.032 04:40:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.032 ************************************ 00:04:03.032 START TEST exit_on_failed_rpc_init 00:04:03.032 ************************************ 00:04:03.032 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:03.032 04:40:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=433482 00:04:03.032 04:40:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 433482 00:04:03.032 04:40:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:03.032 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 433482 ']' 00:04:03.032 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.032 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.032 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.032 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.032 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:03.032 [2024-12-10 04:40:54.136532] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:03.032 [2024-12-10 04:40:54.136575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433482 ] 00:04:03.328 [2024-12-10 04:40:54.212645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.328 [2024-12-10 04:40:54.251011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.604 
04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.604 [2024-12-10 04:40:54.527832] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:03.604 [2024-12-10 04:40:54.527878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433493 ] 00:04:03.604 [2024-12-10 04:40:54.598669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.604 [2024-12-10 04:40:54.637819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.604 [2024-12-10 04:40:54.637872] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:03.604 [2024-12-10 04:40:54.637881] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:03.604 [2024-12-10 04:40:54.637886] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 433482 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 433482 ']' 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 433482 00:04:03.604 04:40:54 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:03.604 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 433482 00:04:03.881 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:03.881 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:03.881 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 433482' 00:04:03.881 killing process with pid 433482 00:04:03.881 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 433482 00:04:03.881 04:40:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 433482 00:04:04.155 00:04:04.155 real 0m0.943s 00:04:04.155 user 0m1.019s 00:04:04.155 sys 0m0.389s 00:04:04.155 04:40:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.155 04:40:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.155 ************************************ 00:04:04.155 END TEST exit_on_failed_rpc_init 00:04:04.155 ************************************ 00:04:04.155 04:40:55 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:04.155 00:04:04.155 real 0m13.113s 00:04:04.155 user 0m12.380s 00:04:04.155 sys 0m1.572s 00:04:04.155 04:40:55 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.155 04:40:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.155 ************************************ 00:04:04.155 END TEST skip_rpc 00:04:04.155 ************************************ 00:04:04.155 04:40:55 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:04.155 04:40:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.155 04:40:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.155 04:40:55 -- common/autotest_common.sh@10 -- # set +x 00:04:04.155 ************************************ 00:04:04.155 START TEST rpc_client 00:04:04.155 ************************************ 00:04:04.155 04:40:55 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:04.155 * Looking for test storage... 00:04:04.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:04.155 04:40:55 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:04.155 04:40:55 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:04.155 04:40:55 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:04.467 04:40:55 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.467 04:40:55 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:04.467 04:40:55 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.467 04:40:55 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:04.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.467 --rc genhtml_branch_coverage=1 00:04:04.467 --rc genhtml_function_coverage=1 00:04:04.467 --rc genhtml_legend=1 00:04:04.467 --rc geninfo_all_blocks=1 00:04:04.467 --rc geninfo_unexecuted_blocks=1 00:04:04.467 00:04:04.467 ' 00:04:04.467 04:40:55 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:04.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.467 --rc genhtml_branch_coverage=1 
00:04:04.467 --rc genhtml_function_coverage=1 00:04:04.467 --rc genhtml_legend=1 00:04:04.467 --rc geninfo_all_blocks=1 00:04:04.467 --rc geninfo_unexecuted_blocks=1 00:04:04.467 00:04:04.467 ' 00:04:04.467 04:40:55 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:04.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.467 --rc genhtml_branch_coverage=1 00:04:04.467 --rc genhtml_function_coverage=1 00:04:04.467 --rc genhtml_legend=1 00:04:04.467 --rc geninfo_all_blocks=1 00:04:04.467 --rc geninfo_unexecuted_blocks=1 00:04:04.467 00:04:04.467 ' 00:04:04.467 04:40:55 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:04.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.467 --rc genhtml_branch_coverage=1 00:04:04.467 --rc genhtml_function_coverage=1 00:04:04.467 --rc genhtml_legend=1 00:04:04.467 --rc geninfo_all_blocks=1 00:04:04.467 --rc geninfo_unexecuted_blocks=1 00:04:04.467 00:04:04.467 ' 00:04:04.467 04:40:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:04.467 OK 00:04:04.467 04:40:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:04.467 00:04:04.467 real 0m0.203s 00:04:04.467 user 0m0.115s 00:04:04.467 sys 0m0.101s 00:04:04.467 04:40:55 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.467 04:40:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:04.467 ************************************ 00:04:04.467 END TEST rpc_client 00:04:04.467 ************************************ 00:04:04.467 04:40:55 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:04.467 04:40:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.467 04:40:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.467 04:40:55 -- common/autotest_common.sh@10 
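The `lt 1.15 2` / `cmp_versions` trace repeated above splits each version string on dots and compares the components numerically, left to right, treating missing components as zero. A simplified sketch of that comparison (an illustrative re-implementation, not the real `scripts/common.sh`):

```shell
# Sketch of component-wise version comparison as walked through in the
# trace above (assumption: simplified; real cmp_versions also splits on
# '-' and ':' and supports more operators).
ver_lt() {
  local IFS=.
  local -a v1=($1) v2=($2)                 # split "1.15" -> (1 15)
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}      # pad the shorter version with 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1                                 # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This is why the log compares `lcov --version` output against `2` before choosing the `--rc lcov_branch_coverage=1` option set: those flags changed between lcov 1.x and 2.x.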
-- # set +x 00:04:04.467 ************************************ 00:04:04.467 START TEST json_config 00:04:04.467 ************************************ 00:04:04.467 04:40:55 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:04.467 04:40:55 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:04.467 04:40:55 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:04.467 04:40:55 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:04.467 04:40:55 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:04.467 04:40:55 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.467 04:40:55 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.467 04:40:55 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.467 04:40:55 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.467 04:40:55 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.467 04:40:55 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.467 04:40:55 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.467 04:40:55 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.467 04:40:55 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.467 04:40:55 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.467 04:40:55 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.467 04:40:55 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:04.467 04:40:55 json_config -- scripts/common.sh@345 -- # : 1 00:04:04.467 04:40:55 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.467 04:40:55 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:04.467 04:40:55 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:04.467 04:40:55 json_config -- scripts/common.sh@353 -- # local d=1 00:04:04.467 04:40:55 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.467 04:40:55 json_config -- scripts/common.sh@355 -- # echo 1 00:04:04.467 04:40:55 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.467 04:40:55 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:04.467 04:40:55 json_config -- scripts/common.sh@353 -- # local d=2 00:04:04.467 04:40:55 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.467 04:40:55 json_config -- scripts/common.sh@355 -- # echo 2 00:04:04.467 04:40:55 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.467 04:40:55 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.467 04:40:55 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.467 04:40:55 json_config -- scripts/common.sh@368 -- # return 0 00:04:04.467 04:40:55 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.467 04:40:55 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:04.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.467 --rc genhtml_branch_coverage=1 00:04:04.467 --rc genhtml_function_coverage=1 00:04:04.467 --rc genhtml_legend=1 00:04:04.467 --rc geninfo_all_blocks=1 00:04:04.467 --rc geninfo_unexecuted_blocks=1 00:04:04.467 00:04:04.467 ' 00:04:04.467 04:40:55 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:04.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.467 --rc genhtml_branch_coverage=1 00:04:04.467 --rc genhtml_function_coverage=1 00:04:04.467 --rc genhtml_legend=1 00:04:04.467 --rc geninfo_all_blocks=1 00:04:04.468 --rc geninfo_unexecuted_blocks=1 00:04:04.468 00:04:04.468 ' 00:04:04.468 04:40:55 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:04.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.468 --rc genhtml_branch_coverage=1 00:04:04.468 --rc genhtml_function_coverage=1 00:04:04.468 --rc genhtml_legend=1 00:04:04.468 --rc geninfo_all_blocks=1 00:04:04.468 --rc geninfo_unexecuted_blocks=1 00:04:04.468 00:04:04.468 ' 00:04:04.468 04:40:55 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:04.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.468 --rc genhtml_branch_coverage=1 00:04:04.468 --rc genhtml_function_coverage=1 00:04:04.468 --rc genhtml_legend=1 00:04:04.468 --rc geninfo_all_blocks=1 00:04:04.468 --rc geninfo_unexecuted_blocks=1 00:04:04.468 00:04:04.468 ' 00:04:04.468 04:40:55 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:04.468 04:40:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:04.468 04:40:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:04.468 04:40:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:04.468 04:40:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:04.468 04:40:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:04.468 04:40:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:04.468 04:40:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:04.468 04:40:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:04.468 04:40:55 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:04.468 04:40:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:04.468 04:40:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:04.752 04:40:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:04.752 04:40:55 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:04.752 04:40:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:04.752 04:40:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:04.752 04:40:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:04.752 04:40:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:04.752 04:40:55 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:04.752 04:40:55 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:04.752 04:40:55 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:04.752 04:40:55 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.752 04:40:55 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.752 04:40:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.752 04:40:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.752 04:40:55 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.752 04:40:55 json_config -- paths/export.sh@5 -- # export PATH 00:04:04.752 04:40:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.752 04:40:55 json_config -- nvmf/common.sh@51 -- # : 0 00:04:04.752 04:40:55 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:04.752 04:40:55 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:04.752 04:40:55 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:04.752 04:40:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:04.752 04:40:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:04.752 04:40:55 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:04.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:04.752 04:40:55 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:04.752 04:40:55 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:04.752 04:40:55 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:04.752 INFO: JSON configuration test init 00:04:04.752 04:40:55 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:04.752 04:40:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.752 04:40:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:04.752 04:40:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.752 04:40:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.752 04:40:55 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:04.752 04:40:55 json_config -- json_config/common.sh@9 -- # local app=target 00:04:04.752 04:40:55 json_config -- json_config/common.sh@10 -- # shift 00:04:04.752 04:40:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:04.752 04:40:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:04.752 04:40:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:04.752 04:40:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.752 04:40:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.752 04:40:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=433854 00:04:04.752 04:40:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:04.752 Waiting for target to run... 
00:04:04.752 04:40:55 json_config -- json_config/common.sh@25 -- # waitforlisten 433854 /var/tmp/spdk_tgt.sock 00:04:04.752 04:40:55 json_config -- common/autotest_common.sh@835 -- # '[' -z 433854 ']' 00:04:04.752 04:40:55 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:04.752 04:40:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:04.752 04:40:55 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:04.752 04:40:55 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:04.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:04.752 04:40:55 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:04.752 04:40:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.752 [2024-12-10 04:40:55.662346] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:04.752 [2024-12-10 04:40:55.662394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433854 ] 00:04:05.011 [2024-12-10 04:40:55.942341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.011 [2024-12-10 04:40:55.973943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.579 04:40:56 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:05.579 04:40:56 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:05.579 04:40:56 json_config -- json_config/common.sh@26 -- # echo '' 00:04:05.579 00:04:05.579 04:40:56 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:05.579 04:40:56 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:05.579 04:40:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.579 04:40:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.579 04:40:56 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:05.579 04:40:56 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:05.579 04:40:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:05.579 04:40:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.579 04:40:56 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:05.579 04:40:56 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:05.579 04:40:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@283 -- # 
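The `waitforlisten 433854 /var/tmp/spdk_tgt.sock` step above blocks until `spdk_tgt` either brings up its Unix-domain RPC socket or dies during startup. A hedged sketch of such a polling loop (simplified assumption: the real helper retries actual RPC calls rather than only checking for the socket file):

```shell
# Sketch of a startup wait loop (assumption: simplified stand-in for the
# SPDK waitforlisten helper; socket path and retry budget are illustrative).
waitforlisten() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
  for (( i = 0; i < 100; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 1   # app died while starting up
    [ -S "$sock" ] && return 0               # RPC socket is up
    sleep 0.1
  done
  return 1                                   # timed out waiting
}
```

Note the earlier `exit_on_failed_rpc_init` test in this log exercised exactly the failure branch: a second target could not bind `/var/tmp/spdk.sock` ("RPC Unix domain socket path ... in use") and stopped with a non-zero status.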
tgt_check_notification_types 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:08.871 04:40:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.871 04:40:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:08.871 04:40:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@54 -- # sort 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:08.871 04:40:59 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:08.871 04:40:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:08.871 04:40:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:08.871 04:40:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.871 04:40:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:08.871 04:40:59 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:08.871 04:40:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:09.130 MallocForNvmf0 00:04:09.130 04:41:00 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:09.130 04:41:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:09.130 MallocForNvmf1 00:04:09.389 04:41:00 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:09.389 04:41:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:09.389 [2024-12-10 04:41:00.463126] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:09.389 04:41:00 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:09.389 04:41:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:09.647 04:41:00 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:09.647 04:41:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:09.906 04:41:00 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:09.906 04:41:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:10.164 04:41:01 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:10.164 04:41:01 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:10.164 [2024-12-10 04:41:01.265537] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:10.164 04:41:01 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:10.164 04:41:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:10.164 04:41:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.423 04:41:01 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:10.423 04:41:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:10.423 04:41:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.423 04:41:01 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:10.423 04:41:01 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:10.423 04:41:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:10.423 MallocBdevForConfigChangeCheck 00:04:10.682 04:41:01 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:10.682 04:41:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:10.682 04:41:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.682 04:41:01 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:10.682 04:41:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:10.941 04:41:01 json_config -- json_config/json_config.sh@368 -- # 
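Condensed from the `tgt_rpc` calls traced above, the json_config test assembles its NVMe-oF TCP target with this `rpc.py` sequence (a recipe, not standalone-runnable: it assumes a running `spdk_tgt` listening on `/var/tmp/spdk_tgt.sock` inside an SPDK checkout):

```shell
# The RPC sequence from the trace above: two malloc bdevs, a TCP transport,
# one subsystem with both namespaces, and a listener on 127.0.0.1:4420.
RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
```

The two `*** TCP Transport Init ***` / `*** NVMe/TCP Target Listening ***` notices in the log correspond to the `nvmf_create_transport` and `nvmf_subsystem_add_listener` calls respectively.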
echo 'INFO: shutting down applications...' 00:04:10.941 INFO: shutting down applications... 00:04:10.941 04:41:01 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:10.941 04:41:01 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:10.941 04:41:01 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:10.941 04:41:01 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:12.846 Calling clear_iscsi_subsystem 00:04:12.846 Calling clear_nvmf_subsystem 00:04:12.846 Calling clear_nbd_subsystem 00:04:12.846 Calling clear_ublk_subsystem 00:04:12.846 Calling clear_vhost_blk_subsystem 00:04:12.846 Calling clear_vhost_scsi_subsystem 00:04:12.846 Calling clear_bdev_subsystem 00:04:12.846 04:41:03 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:12.846 04:41:03 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:12.846 04:41:03 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:12.846 04:41:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:12.846 04:41:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:12.846 04:41:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:12.846 04:41:03 json_config -- json_config/json_config.sh@352 -- # break 00:04:12.846 04:41:03 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:12.846 04:41:03 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:12.846 04:41:03 json_config -- json_config/common.sh@31 -- # local app=target 00:04:12.846 04:41:03 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:12.846 04:41:03 json_config -- json_config/common.sh@35 -- # [[ -n 433854 ]] 00:04:12.846 04:41:03 json_config -- json_config/common.sh@38 -- # kill -SIGINT 433854 00:04:12.846 04:41:03 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:12.846 04:41:03 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:12.846 04:41:03 json_config -- json_config/common.sh@41 -- # kill -0 433854 00:04:12.846 04:41:03 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:13.415 04:41:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:13.415 04:41:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:13.415 04:41:04 json_config -- json_config/common.sh@41 -- # kill -0 433854 00:04:13.415 04:41:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:13.415 04:41:04 json_config -- json_config/common.sh@43 -- # break 00:04:13.415 04:41:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:13.415 04:41:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:13.415 SPDK target shutdown done 00:04:13.416 04:41:04 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:13.416 INFO: relaunching applications... 
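The shutdown sequence traced above (send `SIGINT`, then poll `kill -0` up to 30 times with a 0.5 s sleep before giving up) can be sketched as a standalone helper. This is a minimal sketch, not the actual `json_config/common.sh` implementation; the function name, signal parameter, and retry budget are illustrative.

```shell
#!/usr/bin/env bash
# Sketch of the traced shutdown loop: signal the app, then poll until it
# exits or the retry budget runs out. kill -0 delivers no signal; it only
# tests whether the PID still exists.
shutdown_app() {
    local pid=$1 sig=${2:-SIGINT} max_polls=${3:-30}
    kill -s "$sig" "$pid" 2>/dev/null || return 0    # already gone
    local i
    for (( i = 0; i < max_polls; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 0       # app has exited
        sleep 0.5
    done
    return 1    # still alive after max_polls * 0.5 s
}

# Demo with SIGTERM, since non-interactive shells start background jobs
# with SIGINT ignored.
sleep 60 &
shutdown_app $! SIGTERM && echo 'shutdown done'
```

The `kill -0` existence probe is the same trick the trace uses both here and in `killprocess` later in the log.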
00:04:13.416 04:41:04 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:13.416 04:41:04 json_config -- json_config/common.sh@9 -- # local app=target 00:04:13.416 04:41:04 json_config -- json_config/common.sh@10 -- # shift 00:04:13.416 04:41:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:13.416 04:41:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:13.416 04:41:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:13.416 04:41:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:13.416 04:41:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:13.416 04:41:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=435338 00:04:13.416 04:41:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:13.416 Waiting for target to run... 00:04:13.416 04:41:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:13.416 04:41:04 json_config -- json_config/common.sh@25 -- # waitforlisten 435338 /var/tmp/spdk_tgt.sock 00:04:13.416 04:41:04 json_config -- common/autotest_common.sh@835 -- # '[' -z 435338 ']' 00:04:13.416 04:41:04 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:13.416 04:41:04 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.416 04:41:04 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:13.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:13.416 04:41:04 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.416 04:41:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.416 [2024-12-10 04:41:04.432850] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:13.416 [2024-12-10 04:41:04.432908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435338 ] 00:04:13.983 [2024-12-10 04:41:04.895543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.983 [2024-12-10 04:41:04.950703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.272 [2024-12-10 04:41:07.972835] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:17.272 [2024-12-10 04:41:08.005104] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:17.531 04:41:08 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.531 04:41:08 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:17.531 04:41:08 json_config -- json_config/common.sh@26 -- # echo '' 00:04:17.531 00:04:17.531 04:41:08 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:17.531 04:41:08 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:17.531 INFO: Checking if target configuration is the same... 
00:04:17.531 04:41:08 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.531 04:41:08 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:17.531 04:41:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:17.531 + '[' 2 -ne 2 ']' 00:04:17.790 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:17.790 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:17.790 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:17.790 +++ basename /dev/fd/62 00:04:17.790 ++ mktemp /tmp/62.XXX 00:04:17.790 + tmp_file_1=/tmp/62.HlM 00:04:17.790 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.790 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:17.790 + tmp_file_2=/tmp/spdk_tgt_config.json.HBb 00:04:17.790 + ret=0 00:04:17.790 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:18.049 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:18.049 + diff -u /tmp/62.HlM /tmp/spdk_tgt_config.json.HBb 00:04:18.049 + echo 'INFO: JSON config files are the same' 00:04:18.049 INFO: JSON config files are the same 00:04:18.049 + rm /tmp/62.HlM /tmp/spdk_tgt_config.json.HBb 00:04:18.049 + exit 0 00:04:18.049 04:41:09 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:18.049 04:41:09 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:18.049 INFO: changing configuration and checking if this can be detected... 
00:04:18.049 04:41:09 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:18.049 04:41:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:18.308 04:41:09 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:18.308 04:41:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.308 04:41:09 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.308 + '[' 2 -ne 2 ']' 00:04:18.308 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:18.308 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:18.308 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:18.308 +++ basename /dev/fd/62 00:04:18.308 ++ mktemp /tmp/62.XXX 00:04:18.308 + tmp_file_1=/tmp/62.IaT 00:04:18.308 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.308 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:18.308 + tmp_file_2=/tmp/spdk_tgt_config.json.Rvj 00:04:18.308 + ret=0 00:04:18.308 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:18.567 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:18.567 + diff -u /tmp/62.IaT /tmp/spdk_tgt_config.json.Rvj 00:04:18.567 + ret=1 00:04:18.567 + echo '=== Start of file: /tmp/62.IaT ===' 00:04:18.567 + cat /tmp/62.IaT 00:04:18.567 + echo '=== End of file: /tmp/62.IaT ===' 00:04:18.567 + echo '' 00:04:18.567 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Rvj ===' 00:04:18.567 + cat /tmp/spdk_tgt_config.json.Rvj 00:04:18.567 + echo '=== End of file: /tmp/spdk_tgt_config.json.Rvj ===' 00:04:18.567 + echo '' 00:04:18.567 + rm /tmp/62.IaT /tmp/spdk_tgt_config.json.Rvj 00:04:18.567 + exit 1 00:04:18.567 04:41:09 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:18.567 INFO: configuration change detected. 
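The change-detection flow traced above saves the live target config, normalizes both JSON files (the trace uses `config_filter.py -method sort`), and runs `diff -u` on the temp files, treating a non-empty diff as a detected change. A minimal sketch of that pattern, using `python3 -m json.tool --sort-keys` as a stand-in normalizer and an illustrative function name:

```shell
#!/usr/bin/env bash
# Sketch of the traced diff-based config check. Returns 0 if the two JSON
# files differ after key order is normalized, 1 if they are equivalent.
# python3 -m json.tool --sort-keys stands in for SPDK's config_filter.py.
configs_differ() {
    local t1 t2 rc
    t1=$(mktemp) && t2=$(mktemp)
    python3 -m json.tool --sort-keys "$1" > "$t1"
    python3 -m json.tool --sort-keys "$2" > "$t2"
    if diff -u "$t1" "$t2" > /dev/null; then rc=1; else rc=0; fi
    rm -f "$t1" "$t2"
    return "$rc"
}

# Same content, different key order: normalization makes them compare equal.
printf '{"b": 1, "a": 2}' > /tmp/cfg_x.json
printf '{"a": 2, "b": 1}' > /tmp/cfg_y.json
configs_differ /tmp/cfg_x.json /tmp/cfg_y.json \
    || echo 'INFO: JSON config files are the same'
```

Normalizing before diffing is what lets the test tolerate RPC output ordering while still catching the deleted `MallocBdevForConfigChangeCheck` bdev as a real change.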
00:04:18.567 04:41:09 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:18.567 04:41:09 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:18.567 04:41:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.567 04:41:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.567 04:41:09 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:18.567 04:41:09 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:18.567 04:41:09 json_config -- json_config/json_config.sh@324 -- # [[ -n 435338 ]] 00:04:18.567 04:41:09 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:18.567 04:41:09 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:18.568 04:41:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.568 04:41:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.568 04:41:09 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:18.568 04:41:09 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:18.568 04:41:09 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:18.568 04:41:09 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:18.568 04:41:09 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:18.568 04:41:09 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:18.568 04:41:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:18.568 04:41:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.827 04:41:09 json_config -- json_config/json_config.sh@330 -- # killprocess 435338 00:04:18.827 04:41:09 json_config -- common/autotest_common.sh@954 -- # '[' -z 435338 ']' 00:04:18.827 04:41:09 json_config -- common/autotest_common.sh@958 -- # kill -0 435338 
00:04:18.827 04:41:09 json_config -- common/autotest_common.sh@959 -- # uname 00:04:18.827 04:41:09 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.827 04:41:09 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 435338 00:04:18.827 04:41:09 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.827 04:41:09 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.827 04:41:09 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 435338' 00:04:18.827 killing process with pid 435338 00:04:18.827 04:41:09 json_config -- common/autotest_common.sh@973 -- # kill 435338 00:04:18.827 04:41:09 json_config -- common/autotest_common.sh@978 -- # wait 435338 00:04:20.205 04:41:11 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.205 04:41:11 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:20.205 04:41:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:20.205 04:41:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.465 04:41:11 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:20.465 04:41:11 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:20.465 INFO: Success 00:04:20.465 00:04:20.465 real 0m15.934s 00:04:20.465 user 0m16.599s 00:04:20.465 sys 0m2.603s 00:04:20.465 04:41:11 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.465 04:41:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.465 ************************************ 00:04:20.465 END TEST json_config 00:04:20.465 ************************************ 00:04:20.465 04:41:11 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:20.465 04:41:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.465 04:41:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.465 04:41:11 -- common/autotest_common.sh@10 -- # set +x 00:04:20.465 ************************************ 00:04:20.465 START TEST json_config_extra_key 00:04:20.465 ************************************ 00:04:20.465 04:41:11 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:20.465 04:41:11 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:20.465 04:41:11 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:20.465 04:41:11 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:20.465 04:41:11 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.465 04:41:11 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:20.465 04:41:11 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.465 04:41:11 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:20.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.465 --rc genhtml_branch_coverage=1 00:04:20.465 --rc genhtml_function_coverage=1 00:04:20.465 --rc genhtml_legend=1 00:04:20.465 --rc geninfo_all_blocks=1 
00:04:20.466 --rc geninfo_unexecuted_blocks=1 00:04:20.466 00:04:20.466 ' 00:04:20.466 04:41:11 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:20.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.466 --rc genhtml_branch_coverage=1 00:04:20.466 --rc genhtml_function_coverage=1 00:04:20.466 --rc genhtml_legend=1 00:04:20.466 --rc geninfo_all_blocks=1 00:04:20.466 --rc geninfo_unexecuted_blocks=1 00:04:20.466 00:04:20.466 ' 00:04:20.466 04:41:11 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:20.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.466 --rc genhtml_branch_coverage=1 00:04:20.466 --rc genhtml_function_coverage=1 00:04:20.466 --rc genhtml_legend=1 00:04:20.466 --rc geninfo_all_blocks=1 00:04:20.466 --rc geninfo_unexecuted_blocks=1 00:04:20.466 00:04:20.466 ' 00:04:20.466 04:41:11 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:20.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.466 --rc genhtml_branch_coverage=1 00:04:20.466 --rc genhtml_function_coverage=1 00:04:20.466 --rc genhtml_legend=1 00:04:20.466 --rc geninfo_all_blocks=1 00:04:20.466 --rc geninfo_unexecuted_blocks=1 00:04:20.466 00:04:20.466 ' 00:04:20.466 04:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:20.466 04:41:11 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:20.466 04:41:11 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:20.466 04:41:11 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:20.466 04:41:11 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:20.466 04:41:11 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.466 04:41:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.466 04:41:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.466 04:41:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:20.466 04:41:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:20.466 04:41:11 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:20.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:20.466 04:41:11 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:20.726 04:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:20.726 04:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:20.726 04:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:20.726 04:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:20.726 04:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:20.726 04:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:20.726 04:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:20.726 04:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:20.726 04:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:20.726 04:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:20.726 04:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:20.726 INFO: launching applications... 00:04:20.726 04:41:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:20.726 04:41:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:20.726 04:41:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:20.726 04:41:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:20.726 04:41:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:20.726 04:41:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:20.726 04:41:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.726 04:41:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.726 04:41:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=436790 00:04:20.726 04:41:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:20.726 Waiting for target to run... 
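`waitforlisten` in the trace blocks until the freshly launched target's RPC socket at `/var/tmp/spdk_tgt.sock` is ready. A minimal sketch of that polling pattern, assuming only a socket-existence check; the real `autotest_common.sh` helper also confirms the app answers RPCs, and the names and retry budget here are illustrative.

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style loop: poll until a UNIX-domain socket
# appears, or give up after max_retries * 0.1 s.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        [ -S "$sock" ] && return 0
        sleep 0.1
    done
    return 1
}

# Demo: bind a throwaway socket after a short delay, then wait for it.
rm -f /tmp/demo_rpc.sock
( sleep 0.3
  python3 -c 'import socket; socket.socket(socket.AF_UNIX).bind("/tmp/demo_rpc.sock")' ) &
wait_for_socket /tmp/demo_rpc.sock && echo 'socket is up'
```

Polling for the socket rather than sleeping a fixed interval is what keeps the startup phase fast on quick machines while still tolerating slow DPDK EAL initialization like the one logged above.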
00:04:20.726 04:41:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 436790 /var/tmp/spdk_tgt.sock 00:04:20.726 04:41:11 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 436790 ']' 00:04:20.726 04:41:11 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:20.726 04:41:11 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:20.726 04:41:11 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.726 04:41:11 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:20.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:20.726 04:41:11 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.726 04:41:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:20.726 [2024-12-10 04:41:11.653989] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:20.726 [2024-12-10 04:41:11.654034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436790 ] 00:04:20.985 [2024-12-10 04:41:11.935089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.985 [2024-12-10 04:41:11.967527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.553 04:41:12 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.553 04:41:12 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:21.553 04:41:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:21.553 00:04:21.553 04:41:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:21.553 INFO: shutting down applications... 00:04:21.553 04:41:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:21.553 04:41:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:21.553 04:41:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:21.553 04:41:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 436790 ]] 00:04:21.553 04:41:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 436790 00:04:21.553 04:41:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:21.553 04:41:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:21.553 04:41:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 436790 00:04:21.553 04:41:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:22.122 04:41:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:22.122 04:41:12 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:22.122 04:41:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 436790 00:04:22.122 04:41:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:22.122 04:41:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:22.122 04:41:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:22.122 04:41:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:22.122 SPDK target shutdown done 00:04:22.122 04:41:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:22.122 Success 00:04:22.122 00:04:22.122 real 0m1.569s 00:04:22.122 user 0m1.350s 00:04:22.122 sys 0m0.396s 00:04:22.122 04:41:12 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.122 04:41:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:22.122 ************************************ 00:04:22.122 END TEST json_config_extra_key 00:04:22.122 ************************************ 00:04:22.122 04:41:13 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:22.122 04:41:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.122 04:41:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.122 04:41:13 -- common/autotest_common.sh@10 -- # set +x 00:04:22.122 ************************************ 00:04:22.122 START TEST alias_rpc 00:04:22.122 ************************************ 00:04:22.122 04:41:13 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:22.122 * Looking for test storage... 
00:04:22.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:22.122 04:41:13 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:22.122 04:41:13 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:22.122 04:41:13 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:22.122 04:41:13 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.122 04:41:13 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:22.122 04:41:13 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.122 04:41:13 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:22.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.122 --rc genhtml_branch_coverage=1 00:04:22.122 --rc genhtml_function_coverage=1 00:04:22.122 --rc genhtml_legend=1 00:04:22.122 --rc geninfo_all_blocks=1 00:04:22.122 --rc geninfo_unexecuted_blocks=1 00:04:22.122 00:04:22.122 ' 00:04:22.122 04:41:13 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:22.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.122 --rc genhtml_branch_coverage=1 00:04:22.122 --rc genhtml_function_coverage=1 00:04:22.122 --rc genhtml_legend=1 00:04:22.122 --rc geninfo_all_blocks=1 00:04:22.122 --rc geninfo_unexecuted_blocks=1 00:04:22.122 00:04:22.122 ' 00:04:22.122 04:41:13 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:04:22.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.122 --rc genhtml_branch_coverage=1 00:04:22.122 --rc genhtml_function_coverage=1 00:04:22.122 --rc genhtml_legend=1 00:04:22.122 --rc geninfo_all_blocks=1 00:04:22.122 --rc geninfo_unexecuted_blocks=1 00:04:22.122 00:04:22.122 ' 00:04:22.122 04:41:13 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:22.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.122 --rc genhtml_branch_coverage=1 00:04:22.122 --rc genhtml_function_coverage=1 00:04:22.122 --rc genhtml_legend=1 00:04:22.122 --rc geninfo_all_blocks=1 00:04:22.122 --rc geninfo_unexecuted_blocks=1 00:04:22.122 00:04:22.122 ' 00:04:22.122 04:41:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:22.122 04:41:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.122 04:41:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=437078 00:04:22.122 04:41:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 437078 00:04:22.122 04:41:13 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 437078 ']' 00:04:22.122 04:41:13 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.122 04:41:13 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.122 04:41:13 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.122 04:41:13 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.122 04:41:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.382 [2024-12-10 04:41:13.269823] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
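The `scripts/common.sh` trace above (`cmp_versions 1.15 '<' 2`) splits each dotted version on `IFS=.-:` into an array and compares component by component, padding the shorter array with zeros. A hedged re-sketch of that comparison (numeric components assumed; not the real script):

```shell
#!/usr/bin/env bash
# Compare two dotted versions component-wise; echoes "lt", "gt", or "eq".
cmp_versions() {
    local IFS=.                                  # split on dots only
    local -a v1=($1) v2=($2)
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < len; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}        # missing components count as 0
        (( a > b )) && { echo gt; return; }
        (( a < b )) && { echo lt; return; }
    done
    echo eq
}

cmp_versions 1.15 2        # → lt  (1 < 2 decides at the first component)
cmp_versions 2.39.2 2.39   # → gt  (2.39 is padded to 2.39.0)
```

Comparing component-wise is why `1.15 < 2` holds here even though a plain string or float comparison would get it wrong.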
00:04:22.382 [2024-12-10 04:41:13.269883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437078 ] 00:04:22.382 [2024-12-10 04:41:13.342943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.382 [2024-12-10 04:41:13.383736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.640 04:41:13 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.640 04:41:13 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:22.640 04:41:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:22.900 04:41:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 437078 00:04:22.900 04:41:13 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 437078 ']' 00:04:22.900 04:41:13 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 437078 00:04:22.900 04:41:13 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:22.900 04:41:13 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.900 04:41:13 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 437078 00:04:22.900 04:41:13 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.900 04:41:13 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.900 04:41:13 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 437078' 00:04:22.900 killing process with pid 437078 00:04:22.900 04:41:13 alias_rpc -- common/autotest_common.sh@973 -- # kill 437078 00:04:22.900 04:41:13 alias_rpc -- common/autotest_common.sh@978 -- # wait 437078 00:04:23.159 00:04:23.159 real 0m1.114s 00:04:23.159 user 0m1.139s 00:04:23.159 sys 0m0.399s 00:04:23.159 04:41:14 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.159 04:41:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.159 ************************************ 00:04:23.159 END TEST alias_rpc 00:04:23.159 ************************************ 00:04:23.159 04:41:14 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:23.159 04:41:14 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:23.159 04:41:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.159 04:41:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.159 04:41:14 -- common/autotest_common.sh@10 -- # set +x 00:04:23.159 ************************************ 00:04:23.159 START TEST spdkcli_tcp 00:04:23.159 ************************************ 00:04:23.159 04:41:14 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:23.418 * Looking for test storage... 
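Each suite in this log is driven through a `run_test` wrapper that prints the starred START/END banners and the `real`/`user`/`sys` timing seen above. A rough sketch of that wrapper shape (an illustration of the banner/timing pattern, not SPDK's actual `run_test`):

```shell
#!/usr/bin/env bash
# Run a named test command, bracketing it with banners and timing its
# execution; the command's exit status is preserved.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # `time` is a bash keyword; stats go to stderr
    local rc=$?               # after `time cmd`, $? is cmd's status
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test demo_suite true      # prints the banners around a trivial test
```

Preserving the wrapped command's exit status is what lets the outer autotest script decide pass/fail from the wrapper's return code.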
00:04:23.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:23.418 04:41:14 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:23.418 04:41:14 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:23.418 04:41:14 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:23.418 04:41:14 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:23.418 04:41:14 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.418 04:41:14 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.418 04:41:14 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.418 04:41:14 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.418 04:41:14 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.418 04:41:14 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.418 04:41:14 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.418 04:41:14 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.418 04:41:14 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.418 04:41:14 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.418 04:41:14 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.418 04:41:14 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:23.418 04:41:14 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:23.418 04:41:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.418 04:41:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.419 04:41:14 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:23.419 04:41:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:23.419 04:41:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.419 04:41:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:23.419 04:41:14 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.419 04:41:14 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:23.419 04:41:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:23.419 04:41:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.419 04:41:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:23.419 04:41:14 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.419 04:41:14 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.419 04:41:14 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.419 04:41:14 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:23.419 04:41:14 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.419 04:41:14 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:23.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.419 --rc genhtml_branch_coverage=1 00:04:23.419 --rc genhtml_function_coverage=1 00:04:23.419 --rc genhtml_legend=1 00:04:23.419 --rc geninfo_all_blocks=1 00:04:23.419 --rc geninfo_unexecuted_blocks=1 00:04:23.419 00:04:23.419 ' 00:04:23.419 04:41:14 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:23.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.419 --rc genhtml_branch_coverage=1 00:04:23.419 --rc genhtml_function_coverage=1 00:04:23.419 --rc genhtml_legend=1 00:04:23.419 --rc geninfo_all_blocks=1 00:04:23.419 --rc geninfo_unexecuted_blocks=1 00:04:23.419 00:04:23.419 ' 00:04:23.419 04:41:14 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:23.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.419 --rc genhtml_branch_coverage=1 00:04:23.419 --rc genhtml_function_coverage=1 00:04:23.419 --rc genhtml_legend=1 00:04:23.419 --rc geninfo_all_blocks=1 00:04:23.419 --rc geninfo_unexecuted_blocks=1 00:04:23.419 00:04:23.419 ' 00:04:23.419 04:41:14 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:23.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.419 --rc genhtml_branch_coverage=1 00:04:23.419 --rc genhtml_function_coverage=1 00:04:23.419 --rc genhtml_legend=1 00:04:23.419 --rc geninfo_all_blocks=1 00:04:23.419 --rc geninfo_unexecuted_blocks=1 00:04:23.419 00:04:23.419 ' 00:04:23.419 04:41:14 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:23.419 04:41:14 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:23.419 04:41:14 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:23.419 04:41:14 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:23.419 04:41:14 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:23.419 04:41:14 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:23.419 04:41:14 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:23.419 04:41:14 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.419 04:41:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.419 04:41:14 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=437359 00:04:23.419 04:41:14 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 437359 00:04:23.419 04:41:14 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:23.419 04:41:14 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 437359 ']' 00:04:23.419 04:41:14 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.419 04:41:14 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.419 04:41:14 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.419 04:41:14 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.419 04:41:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.419 [2024-12-10 04:41:14.467251] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:23.419 [2024-12-10 04:41:14.467299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437359 ] 00:04:23.419 [2024-12-10 04:41:14.541676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:23.678 [2024-12-10 04:41:14.581949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.678 [2024-12-10 04:41:14.581950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.678 04:41:14 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.678 04:41:14 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:23.678 04:41:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=437363 00:04:23.678 04:41:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:23.678 04:41:14 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:23.937 [ 00:04:23.937 "bdev_malloc_delete", 00:04:23.937 "bdev_malloc_create", 00:04:23.937 "bdev_null_resize", 00:04:23.937 "bdev_null_delete", 00:04:23.937 "bdev_null_create", 00:04:23.937 "bdev_nvme_cuse_unregister", 00:04:23.937 "bdev_nvme_cuse_register", 00:04:23.937 "bdev_opal_new_user", 00:04:23.937 "bdev_opal_set_lock_state", 00:04:23.937 "bdev_opal_delete", 00:04:23.937 "bdev_opal_get_info", 00:04:23.937 "bdev_opal_create", 00:04:23.937 "bdev_nvme_opal_revert", 00:04:23.937 "bdev_nvme_opal_init", 00:04:23.937 "bdev_nvme_send_cmd", 00:04:23.937 "bdev_nvme_set_keys", 00:04:23.937 "bdev_nvme_get_path_iostat", 00:04:23.937 "bdev_nvme_get_mdns_discovery_info", 00:04:23.937 "bdev_nvme_stop_mdns_discovery", 00:04:23.937 "bdev_nvme_start_mdns_discovery", 00:04:23.937 "bdev_nvme_set_multipath_policy", 00:04:23.937 "bdev_nvme_set_preferred_path", 00:04:23.937 "bdev_nvme_get_io_paths", 00:04:23.937 "bdev_nvme_remove_error_injection", 00:04:23.937 "bdev_nvme_add_error_injection", 00:04:23.937 "bdev_nvme_get_discovery_info", 00:04:23.937 "bdev_nvme_stop_discovery", 00:04:23.937 "bdev_nvme_start_discovery", 00:04:23.937 "bdev_nvme_get_controller_health_info", 00:04:23.937 "bdev_nvme_disable_controller", 00:04:23.937 "bdev_nvme_enable_controller", 00:04:23.937 "bdev_nvme_reset_controller", 00:04:23.937 "bdev_nvme_get_transport_statistics", 00:04:23.937 "bdev_nvme_apply_firmware", 00:04:23.937 "bdev_nvme_detach_controller", 00:04:23.937 "bdev_nvme_get_controllers", 00:04:23.937 "bdev_nvme_attach_controller", 00:04:23.937 "bdev_nvme_set_hotplug", 00:04:23.937 "bdev_nvme_set_options", 00:04:23.937 "bdev_passthru_delete", 00:04:23.937 "bdev_passthru_create", 00:04:23.937 "bdev_lvol_set_parent_bdev", 00:04:23.937 "bdev_lvol_set_parent", 00:04:23.937 "bdev_lvol_check_shallow_copy", 00:04:23.937 "bdev_lvol_start_shallow_copy", 00:04:23.937 "bdev_lvol_grow_lvstore", 00:04:23.937 
"bdev_lvol_get_lvols", 00:04:23.937 "bdev_lvol_get_lvstores", 00:04:23.937 "bdev_lvol_delete", 00:04:23.937 "bdev_lvol_set_read_only", 00:04:23.937 "bdev_lvol_resize", 00:04:23.937 "bdev_lvol_decouple_parent", 00:04:23.937 "bdev_lvol_inflate", 00:04:23.937 "bdev_lvol_rename", 00:04:23.937 "bdev_lvol_clone_bdev", 00:04:23.937 "bdev_lvol_clone", 00:04:23.937 "bdev_lvol_snapshot", 00:04:23.937 "bdev_lvol_create", 00:04:23.937 "bdev_lvol_delete_lvstore", 00:04:23.937 "bdev_lvol_rename_lvstore", 00:04:23.937 "bdev_lvol_create_lvstore", 00:04:23.937 "bdev_raid_set_options", 00:04:23.937 "bdev_raid_remove_base_bdev", 00:04:23.937 "bdev_raid_add_base_bdev", 00:04:23.937 "bdev_raid_delete", 00:04:23.937 "bdev_raid_create", 00:04:23.937 "bdev_raid_get_bdevs", 00:04:23.937 "bdev_error_inject_error", 00:04:23.937 "bdev_error_delete", 00:04:23.937 "bdev_error_create", 00:04:23.937 "bdev_split_delete", 00:04:23.937 "bdev_split_create", 00:04:23.937 "bdev_delay_delete", 00:04:23.937 "bdev_delay_create", 00:04:23.937 "bdev_delay_update_latency", 00:04:23.937 "bdev_zone_block_delete", 00:04:23.937 "bdev_zone_block_create", 00:04:23.937 "blobfs_create", 00:04:23.937 "blobfs_detect", 00:04:23.937 "blobfs_set_cache_size", 00:04:23.937 "bdev_aio_delete", 00:04:23.937 "bdev_aio_rescan", 00:04:23.937 "bdev_aio_create", 00:04:23.937 "bdev_ftl_set_property", 00:04:23.937 "bdev_ftl_get_properties", 00:04:23.937 "bdev_ftl_get_stats", 00:04:23.937 "bdev_ftl_unmap", 00:04:23.937 "bdev_ftl_unload", 00:04:23.937 "bdev_ftl_delete", 00:04:23.937 "bdev_ftl_load", 00:04:23.937 "bdev_ftl_create", 00:04:23.937 "bdev_virtio_attach_controller", 00:04:23.937 "bdev_virtio_scsi_get_devices", 00:04:23.937 "bdev_virtio_detach_controller", 00:04:23.937 "bdev_virtio_blk_set_hotplug", 00:04:23.937 "bdev_iscsi_delete", 00:04:23.938 "bdev_iscsi_create", 00:04:23.938 "bdev_iscsi_set_options", 00:04:23.938 "accel_error_inject_error", 00:04:23.938 "ioat_scan_accel_module", 00:04:23.938 "dsa_scan_accel_module", 
00:04:23.938 "iaa_scan_accel_module", 00:04:23.938 "vfu_virtio_create_fs_endpoint", 00:04:23.938 "vfu_virtio_create_scsi_endpoint", 00:04:23.938 "vfu_virtio_scsi_remove_target", 00:04:23.938 "vfu_virtio_scsi_add_target", 00:04:23.938 "vfu_virtio_create_blk_endpoint", 00:04:23.938 "vfu_virtio_delete_endpoint", 00:04:23.938 "keyring_file_remove_key", 00:04:23.938 "keyring_file_add_key", 00:04:23.938 "keyring_linux_set_options", 00:04:23.938 "fsdev_aio_delete", 00:04:23.938 "fsdev_aio_create", 00:04:23.938 "iscsi_get_histogram", 00:04:23.938 "iscsi_enable_histogram", 00:04:23.938 "iscsi_set_options", 00:04:23.938 "iscsi_get_auth_groups", 00:04:23.938 "iscsi_auth_group_remove_secret", 00:04:23.938 "iscsi_auth_group_add_secret", 00:04:23.938 "iscsi_delete_auth_group", 00:04:23.938 "iscsi_create_auth_group", 00:04:23.938 "iscsi_set_discovery_auth", 00:04:23.938 "iscsi_get_options", 00:04:23.938 "iscsi_target_node_request_logout", 00:04:23.938 "iscsi_target_node_set_redirect", 00:04:23.938 "iscsi_target_node_set_auth", 00:04:23.938 "iscsi_target_node_add_lun", 00:04:23.938 "iscsi_get_stats", 00:04:23.938 "iscsi_get_connections", 00:04:23.938 "iscsi_portal_group_set_auth", 00:04:23.938 "iscsi_start_portal_group", 00:04:23.938 "iscsi_delete_portal_group", 00:04:23.938 "iscsi_create_portal_group", 00:04:23.938 "iscsi_get_portal_groups", 00:04:23.938 "iscsi_delete_target_node", 00:04:23.938 "iscsi_target_node_remove_pg_ig_maps", 00:04:23.938 "iscsi_target_node_add_pg_ig_maps", 00:04:23.938 "iscsi_create_target_node", 00:04:23.938 "iscsi_get_target_nodes", 00:04:23.938 "iscsi_delete_initiator_group", 00:04:23.938 "iscsi_initiator_group_remove_initiators", 00:04:23.938 "iscsi_initiator_group_add_initiators", 00:04:23.938 "iscsi_create_initiator_group", 00:04:23.938 "iscsi_get_initiator_groups", 00:04:23.938 "nvmf_set_crdt", 00:04:23.938 "nvmf_set_config", 00:04:23.938 "nvmf_set_max_subsystems", 00:04:23.938 "nvmf_stop_mdns_prr", 00:04:23.938 "nvmf_publish_mdns_prr", 
00:04:23.938 "nvmf_subsystem_get_listeners", 00:04:23.938 "nvmf_subsystem_get_qpairs", 00:04:23.938 "nvmf_subsystem_get_controllers", 00:04:23.938 "nvmf_get_stats", 00:04:23.938 "nvmf_get_transports", 00:04:23.938 "nvmf_create_transport", 00:04:23.938 "nvmf_get_targets", 00:04:23.938 "nvmf_delete_target", 00:04:23.938 "nvmf_create_target", 00:04:23.938 "nvmf_subsystem_allow_any_host", 00:04:23.938 "nvmf_subsystem_set_keys", 00:04:23.938 "nvmf_subsystem_remove_host", 00:04:23.938 "nvmf_subsystem_add_host", 00:04:23.938 "nvmf_ns_remove_host", 00:04:23.938 "nvmf_ns_add_host", 00:04:23.938 "nvmf_subsystem_remove_ns", 00:04:23.938 "nvmf_subsystem_set_ns_ana_group", 00:04:23.938 "nvmf_subsystem_add_ns", 00:04:23.938 "nvmf_subsystem_listener_set_ana_state", 00:04:23.938 "nvmf_discovery_get_referrals", 00:04:23.938 "nvmf_discovery_remove_referral", 00:04:23.938 "nvmf_discovery_add_referral", 00:04:23.938 "nvmf_subsystem_remove_listener", 00:04:23.938 "nvmf_subsystem_add_listener", 00:04:23.938 "nvmf_delete_subsystem", 00:04:23.938 "nvmf_create_subsystem", 00:04:23.938 "nvmf_get_subsystems", 00:04:23.938 "env_dpdk_get_mem_stats", 00:04:23.938 "nbd_get_disks", 00:04:23.938 "nbd_stop_disk", 00:04:23.938 "nbd_start_disk", 00:04:23.938 "ublk_recover_disk", 00:04:23.938 "ublk_get_disks", 00:04:23.938 "ublk_stop_disk", 00:04:23.938 "ublk_start_disk", 00:04:23.938 "ublk_destroy_target", 00:04:23.938 "ublk_create_target", 00:04:23.938 "virtio_blk_create_transport", 00:04:23.938 "virtio_blk_get_transports", 00:04:23.938 "vhost_controller_set_coalescing", 00:04:23.938 "vhost_get_controllers", 00:04:23.938 "vhost_delete_controller", 00:04:23.938 "vhost_create_blk_controller", 00:04:23.938 "vhost_scsi_controller_remove_target", 00:04:23.938 "vhost_scsi_controller_add_target", 00:04:23.938 "vhost_start_scsi_controller", 00:04:23.938 "vhost_create_scsi_controller", 00:04:23.938 "thread_set_cpumask", 00:04:23.938 "scheduler_set_options", 00:04:23.938 "framework_get_governor", 00:04:23.938 
"framework_get_scheduler", 00:04:23.938 "framework_set_scheduler", 00:04:23.938 "framework_get_reactors", 00:04:23.938 "thread_get_io_channels", 00:04:23.938 "thread_get_pollers", 00:04:23.938 "thread_get_stats", 00:04:23.938 "framework_monitor_context_switch", 00:04:23.938 "spdk_kill_instance", 00:04:23.938 "log_enable_timestamps", 00:04:23.938 "log_get_flags", 00:04:23.938 "log_clear_flag", 00:04:23.938 "log_set_flag", 00:04:23.938 "log_get_level", 00:04:23.938 "log_set_level", 00:04:23.938 "log_get_print_level", 00:04:23.938 "log_set_print_level", 00:04:23.938 "framework_enable_cpumask_locks", 00:04:23.938 "framework_disable_cpumask_locks", 00:04:23.938 "framework_wait_init", 00:04:23.938 "framework_start_init", 00:04:23.938 "scsi_get_devices", 00:04:23.938 "bdev_get_histogram", 00:04:23.938 "bdev_enable_histogram", 00:04:23.938 "bdev_set_qos_limit", 00:04:23.938 "bdev_set_qd_sampling_period", 00:04:23.938 "bdev_get_bdevs", 00:04:23.938 "bdev_reset_iostat", 00:04:23.938 "bdev_get_iostat", 00:04:23.938 "bdev_examine", 00:04:23.938 "bdev_wait_for_examine", 00:04:23.938 "bdev_set_options", 00:04:23.938 "accel_get_stats", 00:04:23.938 "accel_set_options", 00:04:23.938 "accel_set_driver", 00:04:23.938 "accel_crypto_key_destroy", 00:04:23.938 "accel_crypto_keys_get", 00:04:23.938 "accel_crypto_key_create", 00:04:23.938 "accel_assign_opc", 00:04:23.938 "accel_get_module_info", 00:04:23.938 "accel_get_opc_assignments", 00:04:23.938 "vmd_rescan", 00:04:23.938 "vmd_remove_device", 00:04:23.938 "vmd_enable", 00:04:23.938 "sock_get_default_impl", 00:04:23.938 "sock_set_default_impl", 00:04:23.938 "sock_impl_set_options", 00:04:23.938 "sock_impl_get_options", 00:04:23.938 "iobuf_get_stats", 00:04:23.938 "iobuf_set_options", 00:04:23.938 "keyring_get_keys", 00:04:23.938 "vfu_tgt_set_base_path", 00:04:23.938 "framework_get_pci_devices", 00:04:23.938 "framework_get_config", 00:04:23.938 "framework_get_subsystems", 00:04:23.938 "fsdev_set_opts", 00:04:23.938 "fsdev_get_opts", 
00:04:23.938 "trace_get_info", 00:04:23.938 "trace_get_tpoint_group_mask", 00:04:23.938 "trace_disable_tpoint_group", 00:04:23.938 "trace_enable_tpoint_group", 00:04:23.938 "trace_clear_tpoint_mask", 00:04:23.938 "trace_set_tpoint_mask", 00:04:23.938 "notify_get_notifications", 00:04:23.938 "notify_get_types", 00:04:23.938 "spdk_get_version", 00:04:23.938 "rpc_get_methods" 00:04:23.938 ] 00:04:23.938 04:41:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:23.938 04:41:14 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:23.938 04:41:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.938 04:41:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:23.938 04:41:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 437359 00:04:23.938 04:41:15 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 437359 ']' 00:04:23.938 04:41:15 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 437359 00:04:23.938 04:41:15 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:23.938 04:41:15 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.938 04:41:15 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 437359 00:04:24.196 04:41:15 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.196 04:41:15 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.196 04:41:15 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 437359' 00:04:24.196 killing process with pid 437359 00:04:24.196 04:41:15 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 437359 00:04:24.196 04:41:15 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 437359 00:04:24.455 00:04:24.455 real 0m1.143s 00:04:24.455 user 0m1.918s 00:04:24.455 sys 0m0.449s 00:04:24.455 04:41:15 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.455 04:41:15 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:04:24.455 ************************************ 00:04:24.455 END TEST spdkcli_tcp 00:04:24.455 ************************************ 00:04:24.455 04:41:15 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:24.455 04:41:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.455 04:41:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.455 04:41:15 -- common/autotest_common.sh@10 -- # set +x 00:04:24.455 ************************************ 00:04:24.455 START TEST dpdk_mem_utility 00:04:24.455 ************************************ 00:04:24.455 04:41:15 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:24.455 * Looking for test storage... 00:04:24.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:24.455 04:41:15 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:24.455 04:41:15 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:24.455 04:41:15 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:24.714 04:41:15 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.714 04:41:15 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:24.714 04:41:15 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.715 04:41:15 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:04:24.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.715 --rc genhtml_branch_coverage=1 00:04:24.715 --rc genhtml_function_coverage=1 00:04:24.715 --rc genhtml_legend=1 00:04:24.715 --rc geninfo_all_blocks=1 00:04:24.715 --rc geninfo_unexecuted_blocks=1 00:04:24.715 00:04:24.715 ' 00:04:24.715 04:41:15 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:24.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.715 --rc genhtml_branch_coverage=1 00:04:24.715 --rc genhtml_function_coverage=1 00:04:24.715 --rc genhtml_legend=1 00:04:24.715 --rc geninfo_all_blocks=1 00:04:24.715 --rc geninfo_unexecuted_blocks=1 00:04:24.715 00:04:24.715 ' 00:04:24.715 04:41:15 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:24.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.715 --rc genhtml_branch_coverage=1 00:04:24.715 --rc genhtml_function_coverage=1 00:04:24.715 --rc genhtml_legend=1 00:04:24.715 --rc geninfo_all_blocks=1 00:04:24.715 --rc geninfo_unexecuted_blocks=1 00:04:24.715 00:04:24.715 ' 00:04:24.715 04:41:15 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:24.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.715 --rc genhtml_branch_coverage=1 00:04:24.715 --rc genhtml_function_coverage=1 00:04:24.715 --rc genhtml_legend=1 00:04:24.715 --rc geninfo_all_blocks=1 00:04:24.715 --rc geninfo_unexecuted_blocks=1 00:04:24.715 00:04:24.715 ' 00:04:24.715 04:41:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:24.715 04:41:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=437657 00:04:24.715 04:41:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 437657 00:04:24.715 04:41:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.715 04:41:15 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 437657 ']' 00:04:24.715 04:41:15 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.715 04:41:15 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.715 04:41:15 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.715 04:41:15 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.715 04:41:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:24.715 [2024-12-10 04:41:15.678989] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:24.715 [2024-12-10 04:41:15.679036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437657 ] 00:04:24.715 [2024-12-10 04:41:15.754458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.715 [2024-12-10 04:41:15.794070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.024 04:41:16 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.024 04:41:16 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:25.024 04:41:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:25.024 04:41:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:25.024 04:41:16 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.024 
04:41:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:25.024 { 00:04:25.024 "filename": "/tmp/spdk_mem_dump.txt" 00:04:25.024 } 00:04:25.024 04:41:16 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.024 04:41:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:25.024 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:25.024 1 heaps totaling size 818.000000 MiB 00:04:25.024 size: 818.000000 MiB heap id: 0 00:04:25.024 end heaps---------- 00:04:25.024 9 mempools totaling size 603.782043 MiB 00:04:25.024 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:25.024 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:25.024 size: 100.555481 MiB name: bdev_io_437657 00:04:25.024 size: 50.003479 MiB name: msgpool_437657 00:04:25.024 size: 36.509338 MiB name: fsdev_io_437657 00:04:25.024 size: 21.763794 MiB name: PDU_Pool 00:04:25.024 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:25.024 size: 4.133484 MiB name: evtpool_437657 00:04:25.024 size: 0.026123 MiB name: Session_Pool 00:04:25.024 end mempools------- 00:04:25.024 6 memzones totaling size 4.142822 MiB 00:04:25.024 size: 1.000366 MiB name: RG_ring_0_437657 00:04:25.024 size: 1.000366 MiB name: RG_ring_1_437657 00:04:25.024 size: 1.000366 MiB name: RG_ring_4_437657 00:04:25.024 size: 1.000366 MiB name: RG_ring_5_437657 00:04:25.024 size: 0.125366 MiB name: RG_ring_2_437657 00:04:25.024 size: 0.015991 MiB name: RG_ring_3_437657 00:04:25.024 end memzones------- 00:04:25.024 04:41:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:25.284 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:25.284 list of free elements. 
size: 10.852478 MiB 00:04:25.284 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:25.284 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:25.284 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:25.284 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:25.284 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:25.284 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:25.284 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:25.284 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:25.284 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:25.284 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:25.284 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:25.284 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:25.284 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:25.284 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:25.284 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:25.284 list of standard malloc elements. 
size: 199.218628 MiB 00:04:25.284 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:25.284 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:25.284 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:25.284 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:25.284 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:25.284 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:25.284 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:25.284 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:25.284 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:25.284 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:25.284 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:25.284 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:25.284 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:25.284 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:25.284 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:25.284 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:25.284 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:25.284 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:25.284 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:25.284 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:25.284 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:25.284 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:25.284 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:25.284 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:25.284 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:25.284 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:25.284 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:25.284 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:25.284 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:25.284 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:25.284 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:25.284 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:25.284 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:25.284 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:25.284 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:25.284 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:25.284 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:25.284 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:25.284 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:25.284 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:25.284 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:25.284 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:25.284 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:25.284 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:25.284 list of memzone associated elements. 
size: 607.928894 MiB 00:04:25.284 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:25.284 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:25.284 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:25.284 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:25.284 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:25.284 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_437657_0 00:04:25.284 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:25.284 associated memzone info: size: 48.002930 MiB name: MP_msgpool_437657_0 00:04:25.284 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:25.284 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_437657_0 00:04:25.284 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:25.284 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:25.284 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:25.284 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:25.284 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:25.284 associated memzone info: size: 3.000122 MiB name: MP_evtpool_437657_0 00:04:25.284 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:25.284 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_437657 00:04:25.284 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:25.284 associated memzone info: size: 1.007996 MiB name: MP_evtpool_437657 00:04:25.284 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:25.284 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:25.284 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:25.284 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:25.284 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:25.284 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:25.284 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:25.284 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:25.284 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:25.284 associated memzone info: size: 1.000366 MiB name: RG_ring_0_437657 00:04:25.284 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:25.284 associated memzone info: size: 1.000366 MiB name: RG_ring_1_437657 00:04:25.284 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:25.284 associated memzone info: size: 1.000366 MiB name: RG_ring_4_437657 00:04:25.284 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:25.284 associated memzone info: size: 1.000366 MiB name: RG_ring_5_437657 00:04:25.284 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:25.284 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_437657 00:04:25.284 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:25.284 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_437657 00:04:25.284 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:25.284 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:25.285 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:25.285 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:25.285 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:25.285 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:25.285 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:25.285 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_437657 00:04:25.285 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:25.285 associated memzone info: size: 0.125366 MiB name: RG_ring_2_437657 00:04:25.285 element at address: 0x2000064f5b80 with size: 0.031738 MiB 
00:04:25.285 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:25.285 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:25.285 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:25.285 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:25.285 associated memzone info: size: 0.015991 MiB name: RG_ring_3_437657 00:04:25.285 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:25.285 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:25.285 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:25.285 associated memzone info: size: 0.000183 MiB name: MP_msgpool_437657 00:04:25.285 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:25.285 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_437657 00:04:25.285 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:25.285 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_437657 00:04:25.285 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:25.285 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:25.285 04:41:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:25.285 04:41:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 437657 00:04:25.285 04:41:16 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 437657 ']' 00:04:25.285 04:41:16 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 437657 00:04:25.285 04:41:16 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:25.285 04:41:16 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.285 04:41:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 437657 00:04:25.285 04:41:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.285 04:41:16 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.285 04:41:16 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 437657' 00:04:25.285 killing process with pid 437657 00:04:25.285 04:41:16 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 437657 00:04:25.285 04:41:16 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 437657 00:04:25.544 00:04:25.544 real 0m1.012s 00:04:25.544 user 0m0.917s 00:04:25.544 sys 0m0.433s 00:04:25.544 04:41:16 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.544 04:41:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:25.544 ************************************ 00:04:25.544 END TEST dpdk_mem_utility 00:04:25.544 ************************************ 00:04:25.544 04:41:16 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:25.544 04:41:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.544 04:41:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.544 04:41:16 -- common/autotest_common.sh@10 -- # set +x 00:04:25.544 ************************************ 00:04:25.544 START TEST event 00:04:25.544 ************************************ 00:04:25.544 04:41:16 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:25.544 * Looking for test storage... 
00:04:25.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:25.544 04:41:16 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:25.545 04:41:16 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:25.545 04:41:16 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:25.804 04:41:16 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:25.804 04:41:16 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.804 04:41:16 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.804 04:41:16 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.804 04:41:16 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.804 04:41:16 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.804 04:41:16 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.804 04:41:16 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.804 04:41:16 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.804 04:41:16 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.804 04:41:16 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.804 04:41:16 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.804 04:41:16 event -- scripts/common.sh@344 -- # case "$op" in 00:04:25.804 04:41:16 event -- scripts/common.sh@345 -- # : 1 00:04:25.804 04:41:16 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.804 04:41:16 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.804 04:41:16 event -- scripts/common.sh@365 -- # decimal 1 00:04:25.804 04:41:16 event -- scripts/common.sh@353 -- # local d=1 00:04:25.804 04:41:16 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.804 04:41:16 event -- scripts/common.sh@355 -- # echo 1 00:04:25.804 04:41:16 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.804 04:41:16 event -- scripts/common.sh@366 -- # decimal 2 00:04:25.804 04:41:16 event -- scripts/common.sh@353 -- # local d=2 00:04:25.804 04:41:16 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.804 04:41:16 event -- scripts/common.sh@355 -- # echo 2 00:04:25.804 04:41:16 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.804 04:41:16 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.804 04:41:16 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.804 04:41:16 event -- scripts/common.sh@368 -- # return 0 00:04:25.804 04:41:16 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.804 04:41:16 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:25.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.804 --rc genhtml_branch_coverage=1 00:04:25.804 --rc genhtml_function_coverage=1 00:04:25.804 --rc genhtml_legend=1 00:04:25.804 --rc geninfo_all_blocks=1 00:04:25.804 --rc geninfo_unexecuted_blocks=1 00:04:25.804 00:04:25.804 ' 00:04:25.804 04:41:16 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:25.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.804 --rc genhtml_branch_coverage=1 00:04:25.804 --rc genhtml_function_coverage=1 00:04:25.804 --rc genhtml_legend=1 00:04:25.804 --rc geninfo_all_blocks=1 00:04:25.804 --rc geninfo_unexecuted_blocks=1 00:04:25.804 00:04:25.804 ' 00:04:25.804 04:41:16 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:25.804 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:25.804 --rc genhtml_branch_coverage=1 00:04:25.804 --rc genhtml_function_coverage=1 00:04:25.804 --rc genhtml_legend=1 00:04:25.804 --rc geninfo_all_blocks=1 00:04:25.804 --rc geninfo_unexecuted_blocks=1 00:04:25.804 00:04:25.804 ' 00:04:25.804 04:41:16 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:25.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.804 --rc genhtml_branch_coverage=1 00:04:25.804 --rc genhtml_function_coverage=1 00:04:25.804 --rc genhtml_legend=1 00:04:25.804 --rc geninfo_all_blocks=1 00:04:25.804 --rc geninfo_unexecuted_blocks=1 00:04:25.804 00:04:25.804 ' 00:04:25.804 04:41:16 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:25.804 04:41:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:25.804 04:41:16 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:25.804 04:41:16 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:25.804 04:41:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.804 04:41:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:25.804 ************************************ 00:04:25.804 START TEST event_perf 00:04:25.804 ************************************ 00:04:25.804 04:41:16 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:25.804 Running I/O for 1 seconds...[2024-12-10 04:41:16.760785] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:25.804 [2024-12-10 04:41:16.760856] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437843 ] 00:04:25.804 [2024-12-10 04:41:16.838864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:25.804 [2024-12-10 04:41:16.881527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.804 [2024-12-10 04:41:16.881637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:25.804 [2024-12-10 04:41:16.881721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.804 [2024-12-10 04:41:16.881722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:27.183 Running I/O for 1 seconds... 00:04:27.183 lcore 0: 203039 00:04:27.183 lcore 1: 203040 00:04:27.183 lcore 2: 203039 00:04:27.183 lcore 3: 203039 00:04:27.183 done. 
00:04:27.183 00:04:27.183 real 0m1.181s 00:04:27.183 user 0m4.091s 00:04:27.183 sys 0m0.087s 00:04:27.183 04:41:17 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.183 04:41:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:27.183 ************************************ 00:04:27.183 END TEST event_perf 00:04:27.183 ************************************ 00:04:27.183 04:41:17 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:27.183 04:41:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:27.183 04:41:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.183 04:41:17 event -- common/autotest_common.sh@10 -- # set +x 00:04:27.183 ************************************ 00:04:27.183 START TEST event_reactor 00:04:27.183 ************************************ 00:04:27.183 04:41:17 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:27.183 [2024-12-10 04:41:18.012270] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:27.183 [2024-12-10 04:41:18.012339] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438029 ] 00:04:27.183 [2024-12-10 04:41:18.091764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.183 [2024-12-10 04:41:18.132382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.121 test_start 00:04:28.121 oneshot 00:04:28.121 tick 100 00:04:28.121 tick 100 00:04:28.121 tick 250 00:04:28.121 tick 100 00:04:28.121 tick 100 00:04:28.121 tick 250 00:04:28.121 tick 100 00:04:28.121 tick 500 00:04:28.121 tick 100 00:04:28.121 tick 100 00:04:28.121 tick 250 00:04:28.121 tick 100 00:04:28.121 tick 100 00:04:28.121 test_end 00:04:28.121 00:04:28.121 real 0m1.178s 00:04:28.121 user 0m1.100s 00:04:28.121 sys 0m0.074s 00:04:28.121 04:41:19 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.121 04:41:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:28.121 ************************************ 00:04:28.121 END TEST event_reactor 00:04:28.121 ************************************ 00:04:28.121 04:41:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:28.121 04:41:19 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:28.121 04:41:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.121 04:41:19 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.121 ************************************ 00:04:28.121 START TEST event_reactor_perf 00:04:28.121 ************************************ 00:04:28.121 04:41:19 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:28.379 [2024-12-10 04:41:19.259463] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:28.379 [2024-12-10 04:41:19.259535] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438228 ] 00:04:28.380 [2024-12-10 04:41:19.340319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.380 [2024-12-10 04:41:19.382876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.316 test_start 00:04:29.316 test_end 00:04:29.316 Performance: 506528 events per second 00:04:29.316 00:04:29.316 real 0m1.185s 00:04:29.316 user 0m1.106s 00:04:29.316 sys 0m0.074s 00:04:29.316 04:41:20 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.316 04:41:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:29.316 ************************************ 00:04:29.316 END TEST event_reactor_perf 00:04:29.316 ************************************ 00:04:29.575 04:41:20 event -- event/event.sh@49 -- # uname -s 00:04:29.575 04:41:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:29.575 04:41:20 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:29.575 04:41:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.575 04:41:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.575 04:41:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.575 ************************************ 00:04:29.575 START TEST event_scheduler 00:04:29.575 ************************************ 00:04:29.575 04:41:20 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:29.575 * Looking for test storage... 00:04:29.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:29.575 04:41:20 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:29.575 04:41:20 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:29.575 04:41:20 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:29.575 04:41:20 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.575 04:41:20 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:29.575 04:41:20 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.575 04:41:20 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:29.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.575 --rc genhtml_branch_coverage=1 00:04:29.575 --rc genhtml_function_coverage=1 00:04:29.575 --rc genhtml_legend=1 00:04:29.575 --rc geninfo_all_blocks=1 00:04:29.575 --rc geninfo_unexecuted_blocks=1 00:04:29.575 00:04:29.575 ' 00:04:29.575 04:41:20 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:29.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.575 --rc genhtml_branch_coverage=1 00:04:29.575 --rc genhtml_function_coverage=1 00:04:29.575 --rc 
genhtml_legend=1 00:04:29.575 --rc geninfo_all_blocks=1 00:04:29.575 --rc geninfo_unexecuted_blocks=1 00:04:29.575 00:04:29.575 ' 00:04:29.575 04:41:20 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:29.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.575 --rc genhtml_branch_coverage=1 00:04:29.575 --rc genhtml_function_coverage=1 00:04:29.575 --rc genhtml_legend=1 00:04:29.575 --rc geninfo_all_blocks=1 00:04:29.575 --rc geninfo_unexecuted_blocks=1 00:04:29.575 00:04:29.575 ' 00:04:29.575 04:41:20 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:29.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.575 --rc genhtml_branch_coverage=1 00:04:29.575 --rc genhtml_function_coverage=1 00:04:29.575 --rc genhtml_legend=1 00:04:29.575 --rc geninfo_all_blocks=1 00:04:29.576 --rc geninfo_unexecuted_blocks=1 00:04:29.576 00:04:29.576 ' 00:04:29.576 04:41:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:29.576 04:41:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=438543 00:04:29.576 04:41:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:29.576 04:41:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.576 04:41:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 438543 00:04:29.576 04:41:20 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 438543 ']' 00:04:29.576 04:41:20 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.576 04:41:20 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.576 04:41:20 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.576 04:41:20 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.576 04:41:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.835 [2024-12-10 04:41:20.720000] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:29.835 [2024-12-10 04:41:20.720050] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438543 ] 00:04:29.835 [2024-12-10 04:41:20.793585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:29.835 [2024-12-10 04:41:20.837795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.835 [2024-12-10 04:41:20.837905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.835 [2024-12-10 04:41:20.837932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.835 [2024-12-10 04:41:20.837933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:29.835 04:41:20 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.835 04:41:20 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:29.835 04:41:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:29.835 04:41:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.835 04:41:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.835 [2024-12-10 04:41:20.882589] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:29.835 [2024-12-10 04:41:20.882605] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:29.835 [2024-12-10 04:41:20.882614] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:29.835 [2024-12-10 04:41:20.882619] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:29.835 [2024-12-10 04:41:20.882624] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:29.835 04:41:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.835 04:41:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:29.835 04:41:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.835 04:41:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.835 [2024-12-10 04:41:20.957562] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:29.835 04:41:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.835 04:41:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:29.835 04:41:20 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.835 04:41:20 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.835 04:41:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:30.095 ************************************ 00:04:30.095 START TEST scheduler_create_thread 00:04:30.095 ************************************ 00:04:30.095 04:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:30.095 04:41:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:30.095 04:41:20 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.095 04:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.095 2 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.095 3 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.095 4 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.095 5 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.095 04:41:21 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.095 6 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.095 7 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.095 8 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.095 04:41:21 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.095 9 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.095 10 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.095 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.663 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.664 04:41:21 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:30.664 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.664 04:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.041 04:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.041 04:41:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:32.041 04:41:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:32.041 04:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.041 04:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.978 04:41:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.978 00:04:32.978 real 0m3.099s 00:04:32.978 user 0m0.020s 00:04:32.978 sys 0m0.008s 00:04:32.978 04:41:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.978 04:41:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.978 ************************************ 00:04:32.978 END TEST scheduler_create_thread 00:04:32.978 ************************************ 00:04:33.236 04:41:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:33.236 04:41:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 438543 00:04:33.236 04:41:24 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 438543 ']' 00:04:33.236 04:41:24 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 438543 00:04:33.236 04:41:24 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:33.236 04:41:24 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.236 04:41:24 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 438543 00:04:33.236 04:41:24 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:33.236 04:41:24 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:33.236 04:41:24 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 438543' 00:04:33.236 killing process with pid 438543 00:04:33.236 04:41:24 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 438543 00:04:33.236 04:41:24 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 438543 00:04:33.495 [2024-12-10 04:41:24.472574] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:33.755 00:04:33.755 real 0m4.162s 00:04:33.755 user 0m6.647s 00:04:33.755 sys 0m0.382s 00:04:33.755 04:41:24 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.755 04:41:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.755 ************************************ 00:04:33.755 END TEST event_scheduler 00:04:33.755 ************************************ 00:04:33.755 04:41:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:33.755 04:41:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:33.755 04:41:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.755 04:41:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.755 04:41:24 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.755 ************************************ 00:04:33.755 START TEST app_repeat 00:04:33.755 ************************************ 00:04:33.755 04:41:24 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:33.755 04:41:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.755 04:41:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.755 04:41:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:33.755 04:41:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.755 04:41:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:33.755 04:41:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:33.755 04:41:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:33.755 04:41:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=439279 00:04:33.755 04:41:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.755 04:41:24 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:33.755 04:41:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 439279' 00:04:33.755 Process app_repeat pid: 439279 00:04:33.755 04:41:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:33.755 04:41:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:33.755 spdk_app_start Round 0 00:04:33.755 04:41:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 439279 /var/tmp/spdk-nbd.sock 00:04:33.755 04:41:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 439279 ']' 00:04:33.755 04:41:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:33.755 04:41:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.755 04:41:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:33.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:33.755 04:41:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.755 04:41:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:33.755 [2024-12-10 04:41:24.772963] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:33.755 [2024-12-10 04:41:24.773015] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439279 ] 00:04:33.755 [2024-12-10 04:41:24.846663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:34.014 [2024-12-10 04:41:24.889069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.014 [2024-12-10 04:41:24.889070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.014 04:41:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.014 04:41:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:34.014 04:41:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.307 Malloc0 00:04:34.307 04:41:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.307 Malloc1 00:04:34.307 04:41:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.307 04:41:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.307 04:41:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.307 04:41:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:34.307 04:41:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.307 04:41:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:34.307 04:41:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.307 
04:41:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.307 04:41:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.307 04:41:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:34.307 04:41:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.307 04:41:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:34.307 04:41:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:34.307 04:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:34.307 04:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.307 04:41:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:34.632 /dev/nbd0 00:04:34.632 04:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:34.632 04:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:34.632 04:41:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:34.632 04:41:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:34.632 04:41:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:34.632 04:41:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:34.632 04:41:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:34.632 04:41:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:34.632 04:41:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:34.632 04:41:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:34.632 04:41:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:34.632 1+0 records in 00:04:34.632 1+0 records out 00:04:34.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200066 s, 20.5 MB/s 00:04:34.633 04:41:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.633 04:41:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:34.633 04:41:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.633 04:41:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:34.633 04:41:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:34.633 04:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.633 04:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.633 04:41:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:34.892 /dev/nbd1 00:04:34.892 04:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:34.892 04:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:34.892 04:41:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:34.892 04:41:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:34.892 04:41:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:34.892 04:41:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:34.892 04:41:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:34.892 04:41:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:34.892 04:41:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:34.892 04:41:25 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:34.892 04:41:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.892 1+0 records in 00:04:34.892 1+0 records out 00:04:34.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236765 s, 17.3 MB/s 00:04:34.892 04:41:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.892 04:41:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:34.892 04:41:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.892 04:41:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:34.892 04:41:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:34.892 04:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.892 04:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.892 04:41:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.892 04:41:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.892 04:41:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:35.151 { 00:04:35.151 "nbd_device": "/dev/nbd0", 00:04:35.151 "bdev_name": "Malloc0" 00:04:35.151 }, 00:04:35.151 { 00:04:35.151 "nbd_device": "/dev/nbd1", 00:04:35.151 "bdev_name": "Malloc1" 00:04:35.151 } 00:04:35.151 ]' 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:35.151 { 00:04:35.151 "nbd_device": "/dev/nbd0", 00:04:35.151 "bdev_name": "Malloc0" 00:04:35.151 
}, 00:04:35.151 { 00:04:35.151 "nbd_device": "/dev/nbd1", 00:04:35.151 "bdev_name": "Malloc1" 00:04:35.151 } 00:04:35.151 ]' 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:35.151 /dev/nbd1' 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:35.151 /dev/nbd1' 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:35.151 256+0 records in 00:04:35.151 256+0 records out 00:04:35.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100009 s, 105 MB/s 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:35.151 256+0 records in 00:04:35.151 256+0 records out 00:04:35.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137915 s, 76.0 MB/s 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:35.151 256+0 records in 00:04:35.151 256+0 records out 00:04:35.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150148 s, 69.8 MB/s 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:35.151 04:41:26 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.151 04:41:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:35.411 04:41:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:35.411 04:41:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:35.411 04:41:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:35.411 04:41:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.411 04:41:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.411 04:41:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:35.411 04:41:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.411 04:41:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.411 04:41:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.411 04:41:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:35.670 04:41:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:35.670 04:41:26 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:35.670 04:41:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:35.670 04:41:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.670 04:41:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.670 04:41:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:35.670 04:41:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.670 04:41:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.670 04:41:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.670 04:41:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.670 04:41:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.929 04:41:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:35.929 04:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:35.929 04:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.929 04:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:35.929 04:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.929 04:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:35.929 04:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:35.929 04:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:35.929 04:41:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:35.929 04:41:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:35.929 04:41:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:35.929 04:41:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:35.929 04:41:26 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:36.188 04:41:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:36.188 [2024-12-10 04:41:27.229998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.188 [2024-12-10 04:41:27.265847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.188 [2024-12-10 04:41:27.265848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.188 [2024-12-10 04:41:27.306148] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:36.188 [2024-12-10 04:41:27.306191] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:39.477 04:41:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:39.477 04:41:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:39.477 spdk_app_start Round 1 00:04:39.477 04:41:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 439279 /var/tmp/spdk-nbd.sock 00:04:39.477 04:41:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 439279 ']' 00:04:39.477 04:41:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:39.477 04:41:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.477 04:41:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:39.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:39.477 04:41:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.477 04:41:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.477 04:41:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.477 04:41:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:39.477 04:41:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.477 Malloc0 00:04:39.477 04:41:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.736 Malloc1 00:04:39.736 04:41:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.736 04:41:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.736 04:41:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.736 04:41:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:39.736 04:41:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.736 04:41:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:39.736 04:41:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.736 04:41:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.736 04:41:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.736 04:41:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:39.736 04:41:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.736 04:41:30 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:39.736 04:41:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:39.736 04:41:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:39.736 04:41:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.736 04:41:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:39.995 /dev/nbd0 00:04:39.995 04:41:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:39.995 04:41:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:39.995 04:41:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:39.995 04:41:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:39.995 04:41:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:39.995 04:41:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:39.995 04:41:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:39.995 04:41:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:39.995 04:41:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:39.995 04:41:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:39.995 04:41:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.995 1+0 records in 00:04:39.995 1+0 records out 00:04:39.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164885 s, 24.8 MB/s 00:04:39.995 04:41:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.995 04:41:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:39.995 04:41:30 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.995 04:41:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:39.995 04:41:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:39.995 04:41:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.995 04:41:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.995 04:41:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:40.255 /dev/nbd1 00:04:40.255 04:41:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:40.255 04:41:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:40.255 04:41:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:40.255 04:41:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:40.255 04:41:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:40.255 04:41:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:40.255 04:41:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:40.255 04:41:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:40.255 04:41:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:40.255 04:41:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:40.255 04:41:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.255 1+0 records in 00:04:40.255 1+0 records out 00:04:40.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201051 s, 20.4 MB/s 00:04:40.255 04:41:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.255 04:41:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:40.255 04:41:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.255 04:41:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:40.255 04:41:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:40.255 04:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.255 04:41:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.255 04:41:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.255 04:41:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.255 04:41:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.514 04:41:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:40.514 { 00:04:40.514 "nbd_device": "/dev/nbd0", 00:04:40.514 "bdev_name": "Malloc0" 00:04:40.514 }, 00:04:40.514 { 00:04:40.514 "nbd_device": "/dev/nbd1", 00:04:40.514 "bdev_name": "Malloc1" 00:04:40.514 } 00:04:40.514 ]' 00:04:40.514 04:41:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:40.514 { 00:04:40.514 "nbd_device": "/dev/nbd0", 00:04:40.514 "bdev_name": "Malloc0" 00:04:40.514 }, 00:04:40.514 { 00:04:40.514 "nbd_device": "/dev/nbd1", 00:04:40.514 "bdev_name": "Malloc1" 00:04:40.514 } 00:04:40.514 ]' 00:04:40.514 04:41:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.514 04:41:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:40.514 /dev/nbd1' 00:04:40.514 04:41:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:40.514 /dev/nbd1' 00:04:40.514 
04:41:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.514 04:41:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:40.514 04:41:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:40.514 04:41:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:40.514 04:41:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:40.514 04:41:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:40.515 256+0 records in 00:04:40.515 256+0 records out 00:04:40.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107197 s, 97.8 MB/s 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:40.515 256+0 records in 00:04:40.515 256+0 records out 00:04:40.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147229 s, 71.2 MB/s 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:40.515 256+0 records in 00:04:40.515 256+0 records out 00:04:40.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149921 s, 69.9 MB/s 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.515 04:41:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:40.774 04:41:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:40.774 04:41:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:40.774 04:41:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:40.774 04:41:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.774 04:41:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.774 04:41:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:40.774 04:41:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:40.774 04:41:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.774 04:41:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.774 04:41:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:41.033 04:41:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:41.033 04:41:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:41.033 04:41:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:41.033 04:41:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.033 04:41:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.033 04:41:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:41.033 04:41:31 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:41.033 04:41:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.033 04:41:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.033 04:41:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.033 04:41:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.292 04:41:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:41.292 04:41:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:41.292 04:41:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.292 04:41:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:41.292 04:41:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:41.292 04:41:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.292 04:41:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:41.292 04:41:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:41.292 04:41:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:41.292 04:41:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:41.292 04:41:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:41.292 04:41:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:41.292 04:41:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:41.551 04:41:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:41.551 [2024-12-10 04:41:32.571542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.551 [2024-12-10 04:41:32.607245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.551 [2024-12-10 04:41:32.607246] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.551 [2024-12-10 04:41:32.648315] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:41.551 [2024-12-10 04:41:32.648355] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:44.839 04:41:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:44.839 04:41:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:44.839 spdk_app_start Round 2 00:04:44.839 04:41:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 439279 /var/tmp/spdk-nbd.sock 00:04:44.839 04:41:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 439279 ']' 00:04:44.839 04:41:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.839 04:41:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.839 04:41:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:44.839 04:41:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.839 04:41:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:44.839 04:41:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.839 04:41:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:44.839 04:41:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.839 Malloc0 00:04:44.839 04:41:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.098 Malloc1 00:04:45.098 04:41:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.098 04:41:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.098 04:41:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.098 04:41:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:45.098 04:41:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.098 04:41:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:45.098 04:41:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.098 04:41:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.098 04:41:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.098 04:41:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:45.098 04:41:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.098 04:41:36 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:45.098 04:41:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:45.098 04:41:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:45.098 04:41:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.098 04:41:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:45.357 /dev/nbd0 00:04:45.357 04:41:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:45.357 04:41:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:45.357 04:41:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:45.357 04:41:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:45.357 04:41:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:45.357 04:41:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:45.357 04:41:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:45.357 04:41:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:45.357 04:41:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:45.357 04:41:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:45.357 04:41:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.357 1+0 records in 00:04:45.357 1+0 records out 00:04:45.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190338 s, 21.5 MB/s 00:04:45.357 04:41:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.357 04:41:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:45.357 04:41:36 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.357 04:41:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:45.357 04:41:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:45.357 04:41:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.357 04:41:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.358 04:41:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:45.616 /dev/nbd1 00:04:45.616 04:41:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:45.616 04:41:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:45.616 04:41:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:45.616 04:41:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:45.616 04:41:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:45.616 04:41:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:45.616 04:41:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:45.616 04:41:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:45.616 04:41:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:45.616 04:41:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:45.616 04:41:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.616 1+0 records in 00:04:45.616 1+0 records out 00:04:45.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000160151 s, 25.6 MB/s 00:04:45.616 04:41:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.616 04:41:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:45.616 04:41:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.616 04:41:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:45.616 04:41:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:45.616 04:41:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.616 04:41:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.616 04:41:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.616 04:41:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.616 04:41:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:45.876 { 00:04:45.876 "nbd_device": "/dev/nbd0", 00:04:45.876 "bdev_name": "Malloc0" 00:04:45.876 }, 00:04:45.876 { 00:04:45.876 "nbd_device": "/dev/nbd1", 00:04:45.876 "bdev_name": "Malloc1" 00:04:45.876 } 00:04:45.876 ]' 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:45.876 { 00:04:45.876 "nbd_device": "/dev/nbd0", 00:04:45.876 "bdev_name": "Malloc0" 00:04:45.876 }, 00:04:45.876 { 00:04:45.876 "nbd_device": "/dev/nbd1", 00:04:45.876 "bdev_name": "Malloc1" 00:04:45.876 } 00:04:45.876 ]' 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:45.876 /dev/nbd1' 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:45.876 /dev/nbd1' 00:04:45.876 
04:41:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:45.876 256+0 records in 00:04:45.876 256+0 records out 00:04:45.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107279 s, 97.7 MB/s 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:45.876 256+0 records in 00:04:45.876 256+0 records out 00:04:45.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132661 s, 79.0 MB/s 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:45.876 256+0 records in 00:04:45.876 256+0 records out 00:04:45.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147214 s, 71.2 MB/s 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.876 04:41:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:46.135 04:41:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:46.135 04:41:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:46.135 04:41:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:46.135 04:41:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.135 04:41:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.135 04:41:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:46.135 04:41:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.135 04:41:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.135 04:41:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.135 04:41:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:46.395 04:41:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:46.395 04:41:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:46.395 04:41:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:46.395 04:41:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.395 04:41:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.395 04:41:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:46.395 04:41:37 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:46.395 04:41:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.395 04:41:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.395 04:41:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.395 04:41:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.654 04:41:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:46.654 04:41:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:46.654 04:41:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.654 04:41:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:46.654 04:41:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:46.654 04:41:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.654 04:41:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:46.654 04:41:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:46.654 04:41:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:46.654 04:41:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:46.654 04:41:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:46.654 04:41:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:46.654 04:41:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:46.913 04:41:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:46.913 [2024-12-10 04:41:37.946579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.913 [2024-12-10 04:41:37.982078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.913 [2024-12-10 04:41:37.982079] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.913 [2024-12-10 04:41:38.022007] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:46.913 [2024-12-10 04:41:38.022048] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:50.202 04:41:40 event.app_repeat -- event/event.sh@38 -- # waitforlisten 439279 /var/tmp/spdk-nbd.sock 00:04:50.202 04:41:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 439279 ']' 00:04:50.202 04:41:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.202 04:41:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.202 04:41:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
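The `nbd_dd_data_verify` trace earlier in this run (dd of a random file onto each `/dev/nbdX`, then `cmp -b -n 1M` of the device against the file) boils down to a write-then-verify pattern. A minimal sketch, run against regular temp files instead of NBD devices so it needs no NBD setup; the file names are placeholders, not the harness's paths:

```shell
# Sketch of the dd/cmp write-then-verify pattern from nbd_dd_data_verify,
# using regular files instead of /dev/nbdX so no NBD device is required.
write_and_verify() {
    # $1 = pattern file, $2 = target standing in for an NBD device
    dd if="$1" of="$2" bs=4096 count=256 conv=notrunc 2>/dev/null
    # byte-for-byte compare of the first 1 MiB, as the harness's cmp -b -n 1M
    cmp -n 1M "$1" "$2"
}

src=$(mktemp) && dst=$(mktemp)
# generate 1 MiB of pattern data (the harness fills its file from urandom)
dd if=/dev/urandom of="$src" bs=4096 count=256 2>/dev/null
write_and_verify "$src" "$dst" && echo verified
rm -f "$src" "$dst"
```

`oflag=direct` in the log bypasses the page cache so the compare exercises the device, not cached data; it is dropped here because regular files on some filesystems reject O_DIRECT.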
00:04:50.202 04:41:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.202 04:41:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.202 04:41:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.202 04:41:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:50.202 04:41:41 event.app_repeat -- event/event.sh@39 -- # killprocess 439279 00:04:50.202 04:41:41 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 439279 ']' 00:04:50.202 04:41:41 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 439279 00:04:50.202 04:41:41 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:50.202 04:41:41 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.202 04:41:41 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 439279 00:04:50.202 04:41:41 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.202 04:41:41 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.202 04:41:41 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 439279' 00:04:50.202 killing process with pid 439279 00:04:50.202 04:41:41 event.app_repeat -- common/autotest_common.sh@973 -- # kill 439279 00:04:50.202 04:41:41 event.app_repeat -- common/autotest_common.sh@978 -- # wait 439279 00:04:50.202 spdk_app_start is called in Round 0. 00:04:50.202 Shutdown signal received, stop current app iteration 00:04:50.202 Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 reinitialization... 00:04:50.202 spdk_app_start is called in Round 1. 00:04:50.202 Shutdown signal received, stop current app iteration 00:04:50.202 Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 reinitialization... 00:04:50.202 spdk_app_start is called in Round 2. 
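The `waitfornbd_exit` entries above trace a bounded polling loop: up to 20 iterations of `grep -q -w nbdX /proc/partitions`, breaking as soon as the device entry disappears. A generalized sketch of that retry shape (the `wait_for` name and the poll interval are illustrative, not the harness's):

```shell
# Sketch of the bounded polling loop behind waitfornbd_exit: retry a
# command until it succeeds or the attempt budget (20 in the log) runs out.
wait_for() {
    # $1 = max attempts, remaining args = command that succeeds when done
    max=$1; shift
    i=1
    while [ "$i" -le "$max" ]; do
        "$@" && return 0        # condition met: mirrors the `break` in the trace
        i=$((i + 1))
        sleep 0.1
    done
    return 1                    # gave up; the harness would fail the test here
}

# The harness's condition is "nbdX no longer listed", i.e. roughly:
#   wait_for 20 sh -c '! grep -q -w nbd0 /proc/partitions'
```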
00:04:50.202 Shutdown signal received, stop current app iteration 00:04:50.202 Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 reinitialization... 00:04:50.202 spdk_app_start is called in Round 3. 00:04:50.202 Shutdown signal received, stop current app iteration 00:04:50.202 04:41:41 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:50.202 04:41:41 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:50.202 00:04:50.202 real 0m16.465s 00:04:50.202 user 0m36.268s 00:04:50.202 sys 0m2.543s 00:04:50.202 04:41:41 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.202 04:41:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.202 ************************************ 00:04:50.202 END TEST app_repeat 00:04:50.202 ************************************ 00:04:50.202 04:41:41 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:50.202 04:41:41 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:50.202 04:41:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.202 04:41:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.202 04:41:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.202 ************************************ 00:04:50.202 START TEST cpu_locks 00:04:50.202 ************************************ 00:04:50.202 04:41:41 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:50.462 * Looking for test storage... 
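The `nbd_get_count` trace above fetches JSON from `nbd_get_disks`, extracts device paths with `jq -r '.[] | .nbd_device'`, and counts them with `grep -c /dev/nbd`, tolerating the empty case with `true`. A sketch of that counting step (assumes `jq` is installed, as on the CI runner):

```shell
# Sketch of the nbd_get_count pattern: count device paths in the RPC's
# JSON reply. grep -c exits non-zero when the count is 0, hence the
# `|| true` (the trace's bare `true`) to keep the pipeline from failing.
count_nbd_devices() {
    # $1 = JSON array as returned by `rpc.py nbd_get_disks`
    echo "$1" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
}
```

In the run above the reply is `[]` after `nbd_stop_disks`, so the count is 0 and the `-ne 0` guard passes.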
00:04:50.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:50.462 04:41:41 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:50.462 04:41:41 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:04:50.462 04:41:41 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:50.462 04:41:41 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.462 04:41:41 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:50.462 04:41:41 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.462 04:41:41 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:50.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.462 --rc genhtml_branch_coverage=1 00:04:50.462 --rc genhtml_function_coverage=1 00:04:50.462 --rc genhtml_legend=1 00:04:50.462 --rc geninfo_all_blocks=1 00:04:50.462 --rc geninfo_unexecuted_blocks=1 00:04:50.462 00:04:50.462 ' 00:04:50.462 04:41:41 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:50.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.462 --rc genhtml_branch_coverage=1 00:04:50.462 --rc genhtml_function_coverage=1 00:04:50.462 --rc genhtml_legend=1 00:04:50.462 --rc geninfo_all_blocks=1 00:04:50.462 --rc geninfo_unexecuted_blocks=1 
00:04:50.462 00:04:50.462 ' 00:04:50.462 04:41:41 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:50.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.462 --rc genhtml_branch_coverage=1 00:04:50.462 --rc genhtml_function_coverage=1 00:04:50.462 --rc genhtml_legend=1 00:04:50.462 --rc geninfo_all_blocks=1 00:04:50.462 --rc geninfo_unexecuted_blocks=1 00:04:50.462 00:04:50.462 ' 00:04:50.462 04:41:41 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:50.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.462 --rc genhtml_branch_coverage=1 00:04:50.462 --rc genhtml_function_coverage=1 00:04:50.462 --rc genhtml_legend=1 00:04:50.462 --rc geninfo_all_blocks=1 00:04:50.462 --rc geninfo_unexecuted_blocks=1 00:04:50.462 00:04:50.462 ' 00:04:50.462 04:41:41 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:50.462 04:41:41 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:50.462 04:41:41 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:50.462 04:41:41 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:50.462 04:41:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.462 04:41:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.462 04:41:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.462 ************************************ 00:04:50.462 START TEST default_locks 00:04:50.462 ************************************ 00:04:50.462 04:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:50.462 04:41:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=442366 00:04:50.462 04:41:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 442366 00:04:50.462 04:41:41 event.cpu_locks.default_locks 
-- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.462 04:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 442366 ']' 00:04:50.462 04:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.462 04:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.462 04:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.462 04:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.462 04:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.462 [2024-12-10 04:41:41.534614] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
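The lcov gate traced above (`lt 1.15 2` via `cmp_versions` in scripts/common.sh) splits both version strings on dots and compares numerically field by field, padding missing fields with 0. The harness does this with bash arrays; the same field-wise compare is sketched here in portable awk:

```shell
# Sketch of the field-wise version comparison behind `lt 1.15 2` in the
# trace (scripts/common.sh cmp_versions): numeric compare per component,
# absent components treated as 0. awk used for portability; the harness
# itself does this in pure bash.
version_lt() {
    awk -v v1="$1" -v v2="$2" 'BEGIN {
        n1 = split(v1, a, "."); n2 = split(v2, b, ".")
        n = (n1 > n2 ? n1 : n2)
        for (i = 1; i <= n; i++) {
            x = a[i] + 0; y = b[i] + 0   # missing fields coerce to 0
            if (x < y) exit 0            # v1 < v2
            if (x > y) exit 1            # v1 > v2
        }
        exit 1                           # equal is not "less than"
    }'
}
```

Note the compare is numeric, not lexicographic: 1.9 sorts below 1.15, which is why the trace bothers with per-field parsing instead of a string compare.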
00:04:50.463 [2024-12-10 04:41:41.534655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442366 ] 00:04:50.722 [2024-12-10 04:41:41.607966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.722 [2024-12-10 04:41:41.645794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.982 04:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.982 04:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:50.982 04:41:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 442366 00:04:50.982 04:41:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 442366 00:04:50.982 04:41:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:51.549 lslocks: write error 00:04:51.549 04:41:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 442366 00:04:51.549 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 442366 ']' 00:04:51.549 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 442366 00:04:51.549 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:51.549 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.549 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 442366 00:04:51.549 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.549 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.549 04:41:42 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 442366' 00:04:51.549 killing process with pid 442366 00:04:51.549 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 442366 00:04:51.549 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 442366 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 442366 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 442366 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 442366 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 442366 ']' 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
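The `killprocess` trace above follows a defensive shape: confirm the PID is alive with `kill -0`, look up its command name (reactor_0 here), refuse to signal a `sudo` wrapper, then terminate and reap. A condensed sketch of that flow:

```shell
# Sketch of the killprocess helper traced above: probe liveness, check
# the process name, never signal sudo itself, then kill and wait.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                 # still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 in the log
    [ "$name" = sudo ] && return 1             # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap if it is our child
}
```

`kill -0` sends no signal; it only checks that the PID exists and is signalable, which is also how the later `kill -0 442366` probe distinguishes a live target from the "No such process" path.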
00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (442366) - No such process 00:04:51.808 ERROR: process (pid: 442366) is no longer running 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:51.808 00:04:51.808 real 0m1.256s 00:04:51.808 user 0m1.197s 00:04:51.808 sys 0m0.562s 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.808 04:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.808 ************************************ 00:04:51.808 END TEST default_locks 00:04:51.808 ************************************ 00:04:51.808 04:41:42 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:51.808 04:41:42 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.808 04:41:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.808 04:41:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.808 ************************************ 00:04:51.808 START TEST default_locks_via_rpc 00:04:51.808 ************************************ 00:04:51.808 04:41:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:51.808 04:41:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=442624 00:04:51.808 04:41:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 442624 00:04:51.809 04:41:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.809 04:41:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 442624 ']' 00:04:51.809 04:41:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.809 04:41:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.809 04:41:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.809 04:41:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.809 04:41:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.809 [2024-12-10 04:41:42.858983] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
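The repeated "Waiting for process to start up and listen on UNIX domain socket ..." lines come from `waitforlisten`, which polls (up to `max_retries=100` per the trace) until the target's RPC socket is usable. A simplified sketch that only waits for the socket path to appear; the real helper also verifies the RPC server answers, and the socket path here is a placeholder:

```shell
# Sketch of the waitforlisten idea: bounded poll for a UNIX domain
# socket path. The real helper additionally issues an RPC to confirm
# the server is responsive, not just that the socket file exists.
waitforsocket() {
    # $1 = socket path, $2 = max attempts (100 in the trace)
    local sock=$1 max=${2:-100} i=0
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    while [ "$i" -lt "$max" ]; do
        [ -S "$sock" ] && return 0
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}
```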
00:04:51.809 [2024-12-10 04:41:42.859023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442624 ] 00:04:51.809 [2024-12-10 04:41:42.933563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.068 [2024-12-10 04:41:42.974046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.068 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.068 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:52.068 04:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:52.068 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.068 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.068 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.068 04:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:52.068 04:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:52.068 04:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:52.068 04:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:52.068 04:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:52.068 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.068 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.326 04:41:43 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.326 04:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 442624 00:04:52.326 04:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 442624 00:04:52.326 04:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.585 04:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 442624 00:04:52.585 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 442624 ']' 00:04:52.585 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 442624 00:04:52.585 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:52.585 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.585 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 442624 00:04:52.585 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.585 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.585 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 442624' 00:04:52.585 killing process with pid 442624 00:04:52.585 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 442624 00:04:52.585 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 442624 00:04:52.844 00:04:52.844 real 0m1.087s 00:04:52.844 user 0m1.041s 00:04:52.844 sys 0m0.494s 00:04:52.844 04:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.844 04:41:43 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.844 ************************************ 00:04:52.844 END TEST default_locks_via_rpc 00:04:52.845 ************************************ 00:04:52.845 04:41:43 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:52.845 04:41:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.845 04:41:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.845 04:41:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.845 ************************************ 00:04:52.845 START TEST non_locking_app_on_locked_coremask 00:04:52.845 ************************************ 00:04:52.845 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:52.845 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=442872 00:04:52.845 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 442872 /var/tmp/spdk.sock 00:04:52.845 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.845 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 442872 ']' 00:04:52.845 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.845 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.845 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:52.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.845 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.845 04:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.104 [2024-12-10 04:41:44.011113] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:53.104 [2024-12-10 04:41:44.011152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442872 ] 00:04:53.104 [2024-12-10 04:41:44.085458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.104 [2024-12-10 04:41:44.125889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.362 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.362 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:53.362 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=442878 00:04:53.362 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 442878 /var/tmp/spdk2.sock 00:04:53.362 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:53.362 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 442878 ']' 00:04:53.362 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:53.362 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.362 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:53.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:53.362 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.362 04:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.362 [2024-12-10 04:41:44.385154] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:53.362 [2024-12-10 04:41:44.385208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442878 ] 00:04:53.362 [2024-12-10 04:41:44.476412] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
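Throughout these cpu_locks tests, `locks_exist` is traced as `lslocks -p <pid>` piped into `grep -q spdk_cpu_lock`, checking whether the target still holds its CPU core lock files. A sketch of that check (assumes `lslocks` from util-linux is available):

```shell
# Sketch of the locks_exist check from cpu_locks.sh: list file locks held
# by a PID and look for the spdk_cpu_lock lock files. The stray
# "lslocks: write error" lines in the log are benign: grep -q exits at
# the first match and closes the pipe, so lslocks hits EPIPE mid-write.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}
```

This is also why the second spdk_tgt instance above is started with `--disable-cpumask-locks` and its own `-r /var/tmp/spdk2.sock`: without that flag, two targets sharing `-m 0x1` would contend for the same core lock.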
00:04:53.362 [2024-12-10 04:41:44.476444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.621 [2024-12-10 04:41:44.551978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.189 04:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.189 04:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:54.189 04:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 442872 00:04:54.189 04:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 442872 00:04:54.189 04:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:54.756 lslocks: write error 00:04:54.756 04:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 442872 00:04:54.756 04:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 442872 ']' 00:04:54.756 04:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 442872 00:04:54.756 04:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:54.756 04:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.756 04:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 442872 00:04:54.756 04:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.756 04:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.756 04:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 442872' 00:04:54.756 killing process with pid 442872 00:04:54.756 04:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 442872 00:04:54.756 04:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 442872 00:04:55.323 04:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 442878 00:04:55.323 04:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 442878 ']' 00:04:55.323 04:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 442878 00:04:55.323 04:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:55.323 04:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.323 04:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 442878 00:04:55.323 04:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.323 04:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.323 04:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 442878' 00:04:55.323 killing process with pid 442878 00:04:55.323 04:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 442878 00:04:55.323 04:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 442878 00:04:55.582 00:04:55.582 real 0m2.673s 00:04:55.582 user 0m2.825s 00:04:55.582 sys 0m0.880s 00:04:55.582 04:41:46 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.582 04:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.582 ************************************ 00:04:55.582 END TEST non_locking_app_on_locked_coremask 00:04:55.582 ************************************ 00:04:55.582 04:41:46 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:55.583 04:41:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.583 04:41:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.583 04:41:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.583 ************************************ 00:04:55.583 START TEST locking_app_on_unlocked_coremask 00:04:55.583 ************************************ 00:04:55.583 04:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:55.583 04:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=443354 00:04:55.583 04:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:55.583 04:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 443354 /var/tmp/spdk.sock 00:04:55.583 04:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 443354 ']' 00:04:55.583 04:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.583 04:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.583 04:41:46 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.583 04:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.583 04:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.842 [2024-12-10 04:41:46.750309] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:55.842 [2024-12-10 04:41:46.750348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid443354 ] 00:04:55.842 [2024-12-10 04:41:46.823372] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:55.842 [2024-12-10 04:41:46.823398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.842 [2024-12-10 04:41:46.863896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.101 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.101 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:56.101 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=443363 00:04:56.101 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 443363 /var/tmp/spdk2.sock 00:04:56.101 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:56.101 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 443363 ']' 00:04:56.101 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:56.101 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.101 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:56.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:56.101 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.101 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.101 [2024-12-10 04:41:47.124512] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:04:56.101 [2024-12-10 04:41:47.124561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid443363 ] 00:04:56.101 [2024-12-10 04:41:47.208091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.360 [2024-12-10 04:41:47.286986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.929 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.929 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:56.929 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 443363 00:04:56.929 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 443363 00:04:56.929 04:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:57.497 lslocks: write error 00:04:57.497 04:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 443354 00:04:57.497 04:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 443354 ']' 00:04:57.497 04:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 443354 00:04:57.497 04:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:57.497 04:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.497 04:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 443354 00:04:57.497 04:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.497 04:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.497 04:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 443354' 00:04:57.497 killing process with pid 443354 00:04:57.497 04:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 443354 00:04:57.497 04:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 443354 00:04:58.066 04:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 443363 00:04:58.066 04:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 443363 ']' 00:04:58.066 04:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 443363 00:04:58.066 04:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:58.066 04:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.066 04:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 443363 00:04:58.066 04:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.066 04:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.066 04:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 443363' 00:04:58.066 killing process with pid 443363 00:04:58.066 04:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 443363 00:04:58.066 04:41:49 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 443363 00:04:58.325 00:04:58.325 real 0m2.723s 00:04:58.325 user 0m2.880s 00:04:58.325 sys 0m0.894s 00:04:58.325 04:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.325 04:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.325 ************************************ 00:04:58.325 END TEST locking_app_on_unlocked_coremask 00:04:58.325 ************************************ 00:04:58.584 04:41:49 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:58.584 04:41:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.584 04:41:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.584 04:41:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.584 ************************************ 00:04:58.584 START TEST locking_app_on_locked_coremask 00:04:58.584 ************************************ 00:04:58.584 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:58.584 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=443843 00:04:58.584 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 443843 /var/tmp/spdk.sock 00:04:58.584 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.584 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 443843 ']' 00:04:58.584 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
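The `locks_exist` checks above pipe `lslocks -p <pid>` through `grep -q spdk_cpu_lock` to confirm the target is holding its per-core lock file (the stray `lslocks: write error` lines are a harmless side effect of `grep -q` closing the pipe early). Since an SPDK target cannot be reproduced here, the following is an illustrative stand-in using `flock` on a lock-file path styled after the log; the exclusivity behavior is the point, not the exact SPDK mechanics:

```shell
# Illustrative stand-in for SPDK's per-core lock: hold an exclusive flock
# on a /var/tmp/spdk_cpu_lock_* style file, as the target does per core.
lockfile=$(mktemp /tmp/spdk_cpu_lock_XXXXXX)
exec 9>"$lockfile"                      # open fd 9 on the lock file
flock -n 9 && lock_held=yes             # first claim succeeds
# A second non-blocking claim, as a second target would attempt, must fail
# while fd 9 still holds the exclusive lock:
flock -n "$lockfile" -c true || second_claim=refused
echo "lock_held=$lock_held second_claim=$second_claim"
rm -f "$lockfile"
```

This mirrors why the second target in each pairing above is launched with `-r /var/tmp/spdk2.sock`: it needs a distinct RPC socket, but in the non-locking variants it must not contend for the same core-lock file.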
00:04:58.584 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.584 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.584 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.584 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.584 [2024-12-10 04:41:49.547340] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:58.584 [2024-12-10 04:41:49.547383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid443843 ] 00:04:58.584 [2024-12-10 04:41:49.621539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.584 [2024-12-10 04:41:49.661646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=443850 00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 443850 /var/tmp/spdk2.sock 
00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 443850 /var/tmp/spdk2.sock 00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 443850 /var/tmp/spdk2.sock 00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 443850 ']' 00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:58.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.844 04:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.844 [2024-12-10 04:41:49.924920] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:04:58.844 [2024-12-10 04:41:49.924964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid443850 ] 00:04:59.103 [2024-12-10 04:41:50.018266] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 443843 has claimed it. 00:04:59.103 [2024-12-10 04:41:50.018316] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:59.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (443850) - No such process 00:04:59.671 ERROR: process (pid: 443850) is no longer running 00:04:59.671 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.671 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:59.671 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:59.671 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:59.671 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:59.671 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:59.671 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 443843 00:04:59.671 04:41:50 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 443843 00:04:59.671 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:59.671 lslocks: write error 00:04:59.671 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 443843 00:04:59.671 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 443843 ']' 00:04:59.671 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 443843 00:04:59.671 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:59.671 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.671 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 443843 00:04:59.930 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.930 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.930 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 443843' 00:04:59.930 killing process with pid 443843 00:04:59.930 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 443843 00:04:59.930 04:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 443843 00:05:00.189 00:05:00.189 real 0m1.629s 00:05:00.189 user 0m1.752s 00:05:00.189 sys 0m0.535s 00:05:00.189 04:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.189 04:41:51 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:00.189 ************************************ 00:05:00.189 END TEST locking_app_on_locked_coremask 00:05:00.189 ************************************ 00:05:00.189 04:41:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:00.189 04:41:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.189 04:41:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.189 04:41:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.189 ************************************ 00:05:00.189 START TEST locking_overlapped_coremask 00:05:00.189 ************************************ 00:05:00.189 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:00.189 04:41:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=444108 00:05:00.189 04:41:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 444108 /var/tmp/spdk.sock 00:05:00.189 04:41:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:00.189 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 444108 ']' 00:05:00.189 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.189 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.189 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
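The `NOT waitforlisten …` invocations in the section above (and in the next one) exercise the expected-failure path: a second `spdk_tgt` claiming an already-locked core must die with `claim_cpu_cores: *ERROR*`, and the `NOT` wrapper inverts that status so the test passes. A hypothetical reduction of that wrapper — the real `autotest_common.sh` version also validates the argument via `valid_exec_arg`/`type -t`, as the xtrace shows:

```shell
# Hypothetical reduction of the NOT helper: run a command that is
# expected to fail and invert its exit status, so expected failures
# (like a duplicate core-lock claim) count as test success.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed as expected
}
```

Usage matches the log: `NOT waitforlisten 443850 /var/tmp/spdk2.sock` returns 0 precisely because pid 443850 exits before ever listening, which is why the `es=1` bookkeeping afterwards still reports overall success.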
00:05:00.189 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.189 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.189 [2024-12-10 04:41:51.244150] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:00.189 [2024-12-10 04:41:51.244197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444108 ] 00:05:00.189 [2024-12-10 04:41:51.317459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:00.449 [2024-12-10 04:41:51.360228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.449 [2024-12-10 04:41:51.360330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.449 [2024-12-10 04:41:51.360330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=444115 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 444115 /var/tmp/spdk2.sock 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 444115 /var/tmp/spdk2.sock 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 444115 /var/tmp/spdk2.sock 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 444115 ']' 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.449 04:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.708 [2024-12-10 04:41:51.627927] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:05:00.708 [2024-12-10 04:41:51.627971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444115 ] 00:05:00.708 [2024-12-10 04:41:51.720774] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 444108 has claimed it. 00:05:00.708 [2024-12-10 04:41:51.720812] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:01.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (444115) - No such process 00:05:01.276 ERROR: process (pid: 444115) is no longer running 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 444108 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 444108 ']' 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 444108 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 444108 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 444108' 00:05:01.276 killing process with pid 444108 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 444108 00:05:01.276 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 444108 00:05:01.535 00:05:01.535 real 0m1.430s 00:05:01.535 user 0m3.974s 00:05:01.535 sys 0m0.377s 00:05:01.535 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.535 04:41:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.535 ************************************ 
00:05:01.535 END TEST locking_overlapped_coremask 00:05:01.535 ************************************ 00:05:01.535 04:41:52 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:01.535 04:41:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.535 04:41:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.535 04:41:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.796 ************************************ 00:05:01.796 START TEST locking_overlapped_coremask_via_rpc 00:05:01.796 ************************************ 00:05:01.796 04:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:01.796 04:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=444366 00:05:01.796 04:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 444366 /var/tmp/spdk.sock 00:05:01.796 04:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:01.796 04:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 444366 ']' 00:05:01.796 04:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.796 04:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.796 04:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:01.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.796 04:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.796 04:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.796 [2024-12-10 04:41:52.743794] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:01.796 [2024-12-10 04:41:52.743835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444366 ] 00:05:01.796 [2024-12-10 04:41:52.818052] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:01.796 [2024-12-10 04:41:52.818081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:01.796 [2024-12-10 04:41:52.856417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.796 [2024-12-10 04:41:52.856526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.796 [2024-12-10 04:41:52.856527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.055 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.055 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:02.055 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=444446 00:05:02.055 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 444446 /var/tmp/spdk2.sock 00:05:02.055 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:05:02.055 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 444446 ']' 00:05:02.055 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.055 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.055 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.055 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.055 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.055 [2024-12-10 04:41:53.131449] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:02.055 [2024-12-10 04:41:53.131504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444446 ] 00:05:02.315 [2024-12-10 04:41:53.224272] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:02.315 [2024-12-10 04:41:53.224305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:02.315 [2024-12-10 04:41:53.310578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.315 [2024-12-10 04:41:53.310697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.315 [2024-12-10 04:41:53.310697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:02.883 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.883 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:02.883 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:02.883 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.883 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.883 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.883 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:02.883 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:02.883 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:02.883 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:02.883 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.883 04:41:53 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:02.883 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.883 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:02.883 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.883 04:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.883 [2024-12-10 04:41:53.995236] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 444366 has claimed it. 00:05:02.883 request: 00:05:02.883 { 00:05:02.883 "method": "framework_enable_cpumask_locks", 00:05:02.883 "req_id": 1 00:05:02.883 } 00:05:02.883 Got JSON-RPC error response 00:05:02.883 response: 00:05:02.883 { 00:05:02.883 "code": -32603, 00:05:02.883 "message": "Failed to claim CPU core: 2" 00:05:02.883 } 00:05:02.883 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:02.884 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:02.884 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.884 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:02.884 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.884 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 444366 /var/tmp/spdk.sock 00:05:02.884 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 444366 ']' 00:05:02.884 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.884 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.884 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.884 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.884 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.143 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.143 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.143 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 444446 /var/tmp/spdk2.sock 00:05:03.143 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 444446 ']' 00:05:03.143 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.143 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.143 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:03.143 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.143 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.402 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.403 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.403 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:03.403 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:03.403 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:03.403 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:03.403 00:05:03.403 real 0m1.720s 00:05:03.403 user 0m0.816s 00:05:03.403 sys 0m0.153s 00:05:03.403 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.403 04:41:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.403 ************************************ 00:05:03.403 END TEST locking_overlapped_coremask_via_rpc 00:05:03.403 ************************************ 00:05:03.403 04:41:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:03.403 04:41:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 444366 ]] 00:05:03.403 04:41:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 444366 00:05:03.403 04:41:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 444366 ']' 00:05:03.403 04:41:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 444366 00:05:03.403 04:41:54 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:03.403 04:41:54 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.403 04:41:54 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 444366 00:05:03.403 04:41:54 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.403 04:41:54 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.403 04:41:54 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 444366' 00:05:03.403 killing process with pid 444366 00:05:03.403 04:41:54 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 444366 00:05:03.403 04:41:54 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 444366 00:05:03.971 04:41:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 444446 ]] 00:05:03.971 04:41:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 444446 00:05:03.971 04:41:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 444446 ']' 00:05:03.971 04:41:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 444446 00:05:03.971 04:41:54 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:03.971 04:41:54 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.971 04:41:54 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 444446 00:05:03.971 04:41:54 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:03.971 04:41:54 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:03.971 04:41:54 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 444446' 00:05:03.971 
killing process with pid 444446 00:05:03.971 04:41:54 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 444446 00:05:03.971 04:41:54 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 444446 00:05:04.230 04:41:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:04.230 04:41:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:04.230 04:41:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 444366 ]] 00:05:04.230 04:41:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 444366 00:05:04.230 04:41:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 444366 ']' 00:05:04.230 04:41:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 444366 00:05:04.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (444366) - No such process 00:05:04.230 04:41:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 444366 is not found' 00:05:04.230 Process with pid 444366 is not found 00:05:04.230 04:41:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 444446 ]] 00:05:04.230 04:41:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 444446 00:05:04.230 04:41:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 444446 ']' 00:05:04.230 04:41:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 444446 00:05:04.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (444446) - No such process 00:05:04.230 04:41:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 444446 is not found' 00:05:04.230 Process with pid 444446 is not found 00:05:04.230 04:41:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:04.230 00:05:04.230 real 0m13.907s 00:05:04.230 user 0m24.309s 00:05:04.230 sys 0m4.851s 00:05:04.230 04:41:55 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.230 04:41:55 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:05:04.230 ************************************ 00:05:04.230 END TEST cpu_locks 00:05:04.230 ************************************ 00:05:04.230 00:05:04.230 real 0m38.686s 00:05:04.230 user 1m13.796s 00:05:04.230 sys 0m8.382s 00:05:04.230 04:41:55 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.230 04:41:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.230 ************************************ 00:05:04.230 END TEST event 00:05:04.230 ************************************ 00:05:04.230 04:41:55 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:04.230 04:41:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.230 04:41:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.230 04:41:55 -- common/autotest_common.sh@10 -- # set +x 00:05:04.230 ************************************ 00:05:04.230 START TEST thread 00:05:04.230 ************************************ 00:05:04.230 04:41:55 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:04.230 * Looking for test storage... 
00:05:04.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:04.489 04:41:55 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:04.489 04:41:55 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:04.489 04:41:55 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:04.489 04:41:55 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:04.489 04:41:55 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.489 04:41:55 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.489 04:41:55 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.489 04:41:55 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.489 04:41:55 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.489 04:41:55 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.489 04:41:55 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.489 04:41:55 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.489 04:41:55 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.489 04:41:55 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.489 04:41:55 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.489 04:41:55 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:04.489 04:41:55 thread -- scripts/common.sh@345 -- # : 1 00:05:04.489 04:41:55 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.489 04:41:55 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.489 04:41:55 thread -- scripts/common.sh@365 -- # decimal 1 00:05:04.489 04:41:55 thread -- scripts/common.sh@353 -- # local d=1 00:05:04.489 04:41:55 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.489 04:41:55 thread -- scripts/common.sh@355 -- # echo 1 00:05:04.489 04:41:55 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.489 04:41:55 thread -- scripts/common.sh@366 -- # decimal 2 00:05:04.489 04:41:55 thread -- scripts/common.sh@353 -- # local d=2 00:05:04.489 04:41:55 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.489 04:41:55 thread -- scripts/common.sh@355 -- # echo 2 00:05:04.489 04:41:55 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.489 04:41:55 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.489 04:41:55 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.489 04:41:55 thread -- scripts/common.sh@368 -- # return 0 00:05:04.489 04:41:55 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.489 04:41:55 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:04.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.489 --rc genhtml_branch_coverage=1 00:05:04.489 --rc genhtml_function_coverage=1 00:05:04.489 --rc genhtml_legend=1 00:05:04.489 --rc geninfo_all_blocks=1 00:05:04.489 --rc geninfo_unexecuted_blocks=1 00:05:04.489 00:05:04.490 ' 00:05:04.490 04:41:55 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:04.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.490 --rc genhtml_branch_coverage=1 00:05:04.490 --rc genhtml_function_coverage=1 00:05:04.490 --rc genhtml_legend=1 00:05:04.490 --rc geninfo_all_blocks=1 00:05:04.490 --rc geninfo_unexecuted_blocks=1 00:05:04.490 00:05:04.490 ' 00:05:04.490 04:41:55 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:04.490 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.490 --rc genhtml_branch_coverage=1 00:05:04.490 --rc genhtml_function_coverage=1 00:05:04.490 --rc genhtml_legend=1 00:05:04.490 --rc geninfo_all_blocks=1 00:05:04.490 --rc geninfo_unexecuted_blocks=1 00:05:04.490 00:05:04.490 ' 00:05:04.490 04:41:55 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:04.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.490 --rc genhtml_branch_coverage=1 00:05:04.490 --rc genhtml_function_coverage=1 00:05:04.490 --rc genhtml_legend=1 00:05:04.490 --rc geninfo_all_blocks=1 00:05:04.490 --rc geninfo_unexecuted_blocks=1 00:05:04.490 00:05:04.490 ' 00:05:04.490 04:41:55 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:04.490 04:41:55 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:04.490 04:41:55 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.490 04:41:55 thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.490 ************************************ 00:05:04.490 START TEST thread_poller_perf 00:05:04.490 ************************************ 00:05:04.490 04:41:55 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:04.490 [2024-12-10 04:41:55.507313] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:05:04.490 [2024-12-10 04:41:55.507375] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444929 ] 00:05:04.490 [2024-12-10 04:41:55.584128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.749 [2024-12-10 04:41:55.622661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.749 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:05.686 [2024-12-10T03:41:56.823Z] ====================================== 00:05:05.686 [2024-12-10T03:41:56.823Z] busy:2105633208 (cyc) 00:05:05.686 [2024-12-10T03:41:56.823Z] total_run_count: 422000 00:05:05.686 [2024-12-10T03:41:56.823Z] tsc_hz: 2100000000 (cyc) 00:05:05.686 [2024-12-10T03:41:56.823Z] ====================================== 00:05:05.686 [2024-12-10T03:41:56.823Z] poller_cost: 4989 (cyc), 2375 (nsec) 00:05:05.686 00:05:05.686 real 0m1.172s 00:05:05.686 user 0m1.098s 00:05:05.686 sys 0m0.070s 00:05:05.686 04:41:56 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.686 04:41:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.686 ************************************ 00:05:05.686 END TEST thread_poller_perf 00:05:05.686 ************************************ 00:05:05.686 04:41:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:05.686 04:41:56 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:05.686 04:41:56 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.686 04:41:56 thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.686 ************************************ 00:05:05.686 START TEST thread_poller_perf 00:05:05.686 
************************************ 00:05:05.686 04:41:56 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:05.686 [2024-12-10 04:41:56.749907] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:05.686 [2024-12-10 04:41:56.749977] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid445170 ] 00:05:05.945 [2024-12-10 04:41:56.827153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.946 [2024-12-10 04:41:56.865909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.946 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:06.883 [2024-12-10T03:41:58.020Z] ====================================== 00:05:06.883 [2024-12-10T03:41:58.020Z] busy:2101364432 (cyc) 00:05:06.883 [2024-12-10T03:41:58.020Z] total_run_count: 5188000 00:05:06.883 [2024-12-10T03:41:58.020Z] tsc_hz: 2100000000 (cyc) 00:05:06.883 [2024-12-10T03:41:58.020Z] ====================================== 00:05:06.883 [2024-12-10T03:41:58.020Z] poller_cost: 405 (cyc), 192 (nsec) 00:05:06.883 00:05:06.883 real 0m1.173s 00:05:06.883 user 0m1.099s 00:05:06.883 sys 0m0.069s 00:05:06.883 04:41:57 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.883 04:41:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.883 ************************************ 00:05:06.883 END TEST thread_poller_perf 00:05:06.883 ************************************ 00:05:06.883 04:41:57 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:06.883 00:05:06.883 real 0m2.651s 00:05:06.883 user 0m2.344s 00:05:06.883 sys 0m0.321s 00:05:06.883 04:41:57 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.883 04:41:57 thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.883 ************************************ 00:05:06.883 END TEST thread 00:05:06.883 ************************************ 00:05:06.883 04:41:57 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:06.883 04:41:57 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:06.883 04:41:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.883 04:41:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.883 04:41:57 -- common/autotest_common.sh@10 -- # set +x 00:05:06.883 ************************************ 00:05:06.883 START TEST app_cmdline 00:05:06.883 ************************************ 00:05:06.883 04:41:58 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:07.143 * Looking for test storage... 00:05:07.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:07.143 04:41:58 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:07.143 04:41:58 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:07.143 04:41:58 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.143 04:41:58 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.143 04:41:58 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:07.143 04:41:58 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.143 04:41:58 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:07.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.143 --rc genhtml_branch_coverage=1 
00:05:07.143 --rc genhtml_function_coverage=1 00:05:07.143 --rc genhtml_legend=1 00:05:07.143 --rc geninfo_all_blocks=1 00:05:07.143 --rc geninfo_unexecuted_blocks=1 00:05:07.143 00:05:07.143 ' 00:05:07.143 04:41:58 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:07.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.143 --rc genhtml_branch_coverage=1 00:05:07.143 --rc genhtml_function_coverage=1 00:05:07.143 --rc genhtml_legend=1 00:05:07.143 --rc geninfo_all_blocks=1 00:05:07.143 --rc geninfo_unexecuted_blocks=1 00:05:07.143 00:05:07.143 ' 00:05:07.143 04:41:58 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:07.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.143 --rc genhtml_branch_coverage=1 00:05:07.143 --rc genhtml_function_coverage=1 00:05:07.143 --rc genhtml_legend=1 00:05:07.143 --rc geninfo_all_blocks=1 00:05:07.143 --rc geninfo_unexecuted_blocks=1 00:05:07.143 00:05:07.143 ' 00:05:07.143 04:41:58 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:07.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.143 --rc genhtml_branch_coverage=1 00:05:07.143 --rc genhtml_function_coverage=1 00:05:07.143 --rc genhtml_legend=1 00:05:07.143 --rc geninfo_all_blocks=1 00:05:07.143 --rc geninfo_unexecuted_blocks=1 00:05:07.143 00:05:07.143 ' 00:05:07.143 04:41:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:07.143 04:41:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=445469 00:05:07.143 04:41:58 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:07.143 04:41:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 445469 00:05:07.143 04:41:58 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 445469 ']' 00:05:07.143 04:41:58 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:07.143 04:41:58 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.143 04:41:58 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.143 04:41:58 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.143 04:41:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:07.143 [2024-12-10 04:41:58.225123] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:07.143 [2024-12-10 04:41:58.225173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid445469 ] 00:05:07.403 [2024-12-10 04:41:58.298474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.403 [2024-12-10 04:41:58.336503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.662 04:41:58 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.662 04:41:58 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:07.662 04:41:58 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:07.662 { 00:05:07.662 "version": "SPDK v25.01-pre git sha1 86d35c37a", 00:05:07.662 "fields": { 00:05:07.662 "major": 25, 00:05:07.662 "minor": 1, 00:05:07.662 "patch": 0, 00:05:07.662 "suffix": "-pre", 00:05:07.662 "commit": "86d35c37a" 00:05:07.662 } 00:05:07.662 } 00:05:07.662 04:41:58 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:07.662 04:41:58 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:07.662 04:41:58 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:05:07.662 04:41:58 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:07.662 04:41:58 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:07.662 04:41:58 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:07.662 04:41:58 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.662 04:41:58 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:07.662 04:41:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:07.662 04:41:58 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.921 04:41:58 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:07.921 04:41:58 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:07.921 04:41:58 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:07.921 04:41:58 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:07.921 04:41:58 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:07.921 04:41:58 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:07.921 04:41:58 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.921 04:41:58 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:07.921 04:41:58 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.921 04:41:58 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:07.921 04:41:58 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:05:07.921 04:41:58 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:07.921 04:41:58 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:07.921 04:41:58 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:07.921 request: 00:05:07.921 { 00:05:07.921 "method": "env_dpdk_get_mem_stats", 00:05:07.921 "req_id": 1 00:05:07.922 } 00:05:07.922 Got JSON-RPC error response 00:05:07.922 response: 00:05:07.922 { 00:05:07.922 "code": -32601, 00:05:07.922 "message": "Method not found" 00:05:07.922 } 00:05:07.922 04:41:59 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:07.922 04:41:59 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:07.922 04:41:59 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:07.922 04:41:59 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:07.922 04:41:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 445469 00:05:07.922 04:41:59 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 445469 ']' 00:05:07.922 04:41:59 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 445469 00:05:07.922 04:41:59 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:07.922 04:41:59 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.922 04:41:59 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 445469 00:05:08.181 04:41:59 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.181 04:41:59 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.181 04:41:59 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 445469' 00:05:08.181 killing process with pid 445469 00:05:08.181 04:41:59 
app_cmdline -- common/autotest_common.sh@973 -- # kill 445469 00:05:08.181 04:41:59 app_cmdline -- common/autotest_common.sh@978 -- # wait 445469 00:05:08.440 00:05:08.440 real 0m1.348s 00:05:08.440 user 0m1.568s 00:05:08.440 sys 0m0.455s 00:05:08.440 04:41:59 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.440 04:41:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:08.440 ************************************ 00:05:08.440 END TEST app_cmdline 00:05:08.440 ************************************ 00:05:08.440 04:41:59 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:08.440 04:41:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.440 04:41:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.440 04:41:59 -- common/autotest_common.sh@10 -- # set +x 00:05:08.440 ************************************ 00:05:08.440 START TEST version 00:05:08.440 ************************************ 00:05:08.440 04:41:59 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:08.440 * Looking for test storage... 
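The killprocess trace above probes the pid (`kill -0`), checks the process name, then kills and reaps it. A simplified standalone sketch of that flow — the name comes from the trace, but the body below is an assumption, not the real autotest_common.sh implementation:

```shell
# Simplified sketch of the killprocess flow traced above (assumed body,
# not the actual autotest_common.sh code): verify the pid is alive,
# kill it, then wait so no zombie is left behind.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1               # refuse an empty pid, as the '-z' test above does
    kill -0 "$pid" 2>/dev/null || return 1  # kill -0 probes existence without signaling
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true         # reap; ignore the signal exit status
}
```

The `kill -0` probe is the same idiom the trace uses (`kill -0 445469`): signal 0 delivers nothing but still reports whether the pid exists.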
00:05:08.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:08.440 04:41:59 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.440 04:41:59 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.440 04:41:59 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:08.700 04:41:59 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:08.700 04:41:59 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.700 04:41:59 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.700 04:41:59 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.700 04:41:59 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.700 04:41:59 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.700 04:41:59 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.700 04:41:59 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.700 04:41:59 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.700 04:41:59 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.700 04:41:59 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.700 04:41:59 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.700 04:41:59 version -- scripts/common.sh@344 -- # case "$op" in 00:05:08.700 04:41:59 version -- scripts/common.sh@345 -- # : 1 00:05:08.700 04:41:59 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.700 04:41:59 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.700 04:41:59 version -- scripts/common.sh@365 -- # decimal 1 00:05:08.700 04:41:59 version -- scripts/common.sh@353 -- # local d=1 00:05:08.700 04:41:59 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.700 04:41:59 version -- scripts/common.sh@355 -- # echo 1 00:05:08.700 04:41:59 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.700 04:41:59 version -- scripts/common.sh@366 -- # decimal 2 00:05:08.700 04:41:59 version -- scripts/common.sh@353 -- # local d=2 00:05:08.700 04:41:59 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.700 04:41:59 version -- scripts/common.sh@355 -- # echo 2 00:05:08.700 04:41:59 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.700 04:41:59 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.700 04:41:59 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.700 04:41:59 version -- scripts/common.sh@368 -- # return 0 00:05:08.700 04:41:59 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.700 04:41:59 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:08.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.700 --rc genhtml_branch_coverage=1 00:05:08.700 --rc genhtml_function_coverage=1 00:05:08.700 --rc genhtml_legend=1 00:05:08.700 --rc geninfo_all_blocks=1 00:05:08.700 --rc geninfo_unexecuted_blocks=1 00:05:08.700 00:05:08.700 ' 00:05:08.700 04:41:59 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:08.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.700 --rc genhtml_branch_coverage=1 00:05:08.700 --rc genhtml_function_coverage=1 00:05:08.700 --rc genhtml_legend=1 00:05:08.700 --rc geninfo_all_blocks=1 00:05:08.700 --rc geninfo_unexecuted_blocks=1 00:05:08.700 00:05:08.700 ' 00:05:08.700 04:41:59 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:08.700 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.700 --rc genhtml_branch_coverage=1 00:05:08.700 --rc genhtml_function_coverage=1 00:05:08.700 --rc genhtml_legend=1 00:05:08.700 --rc geninfo_all_blocks=1 00:05:08.700 --rc geninfo_unexecuted_blocks=1 00:05:08.700 00:05:08.700 ' 00:05:08.700 04:41:59 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:08.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.700 --rc genhtml_branch_coverage=1 00:05:08.700 --rc genhtml_function_coverage=1 00:05:08.700 --rc genhtml_legend=1 00:05:08.700 --rc geninfo_all_blocks=1 00:05:08.700 --rc geninfo_unexecuted_blocks=1 00:05:08.700 00:05:08.700 ' 00:05:08.700 04:41:59 version -- app/version.sh@17 -- # get_header_version major 00:05:08.700 04:41:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:08.700 04:41:59 version -- app/version.sh@14 -- # cut -f2 00:05:08.700 04:41:59 version -- app/version.sh@14 -- # tr -d '"' 00:05:08.700 04:41:59 version -- app/version.sh@17 -- # major=25 00:05:08.700 04:41:59 version -- app/version.sh@18 -- # get_header_version minor 00:05:08.700 04:41:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:08.700 04:41:59 version -- app/version.sh@14 -- # cut -f2 00:05:08.700 04:41:59 version -- app/version.sh@14 -- # tr -d '"' 00:05:08.700 04:41:59 version -- app/version.sh@18 -- # minor=1 00:05:08.700 04:41:59 version -- app/version.sh@19 -- # get_header_version patch 00:05:08.700 04:41:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:08.700 04:41:59 version -- app/version.sh@14 -- # cut -f2 00:05:08.700 04:41:59 version -- app/version.sh@14 -- # tr -d '"' 00:05:08.700 
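The get_header_version calls traced above pull SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h with a grep | cut | tr pipeline. A minimal standalone sketch of that pattern, assuming (as the `cut -f2` implies) tab-separated `#define` lines; the sample header below is hypothetical:

```shell
# Sketch of the version.sh get_header_version pipeline traced above:
# grep the #define line, take the tab-separated value, strip quotes.
get_header_version() {
    # $1 = path to version.h, $2 = field name (MAJOR, MINOR, PATCH, SUFFIX)
    grep -E "^#define SPDK_VERSION_$2[[:space:]]+" "$1" | cut -f2 | tr -d '"'
}

# Hypothetical header for illustration (tab between macro and value):
hdr=$(mktemp)
printf '#define SPDK_VERSION_MAJOR\t25\n#define SPDK_VERSION_SUFFIX\t"-pre"\n' > "$hdr"
get_header_version "$hdr" MAJOR    # prints 25
get_header_version "$hdr" SUFFIX   # prints -pre
rm -f "$hdr"
```

The `tr -d '"'` step matters only for the SUFFIX field, whose value is quoted in the header.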
04:41:59 version -- app/version.sh@19 -- # patch=0 00:05:08.700 04:41:59 version -- app/version.sh@20 -- # get_header_version suffix 00:05:08.700 04:41:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:08.700 04:41:59 version -- app/version.sh@14 -- # cut -f2 00:05:08.700 04:41:59 version -- app/version.sh@14 -- # tr -d '"' 00:05:08.700 04:41:59 version -- app/version.sh@20 -- # suffix=-pre 00:05:08.700 04:41:59 version -- app/version.sh@22 -- # version=25.1 00:05:08.700 04:41:59 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:08.700 04:41:59 version -- app/version.sh@28 -- # version=25.1rc0 00:05:08.700 04:41:59 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:08.700 04:41:59 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:08.700 04:41:59 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:08.700 04:41:59 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:08.700 00:05:08.700 real 0m0.244s 00:05:08.700 user 0m0.162s 00:05:08.700 sys 0m0.124s 00:05:08.700 04:41:59 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.700 04:41:59 version -- common/autotest_common.sh@10 -- # set +x 00:05:08.700 ************************************ 00:05:08.700 END TEST version 00:05:08.700 ************************************ 00:05:08.700 04:41:59 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:08.700 04:41:59 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:08.700 04:41:59 -- spdk/autotest.sh@194 -- # uname -s 00:05:08.700 04:41:59 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:08.700 04:41:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:08.700 04:41:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:08.700 04:41:59 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:08.701 04:41:59 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:08.701 04:41:59 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:08.701 04:41:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:08.701 04:41:59 -- common/autotest_common.sh@10 -- # set +x 00:05:08.701 04:41:59 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:08.701 04:41:59 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:08.701 04:41:59 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:08.701 04:41:59 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:08.701 04:41:59 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:08.701 04:41:59 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:08.701 04:41:59 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:08.701 04:41:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:08.701 04:41:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.701 04:41:59 -- common/autotest_common.sh@10 -- # set +x 00:05:08.701 ************************************ 00:05:08.701 START TEST nvmf_tcp 00:05:08.701 ************************************ 00:05:08.701 04:41:59 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:08.960 * Looking for test storage... 
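The repeated `lt 1.15 2` / cmp_versions traces above compare the detected lcov version against 2 field by field. A self-contained bash sketch of that comparison — simplified from what the scripts/common.sh trace shows, with the zero-padding detail assumed:

```shell
# Bash sketch of the cmp_versions "less than" check traced above:
# split both versions on dots, compare numerically left to right,
# treating missing fields as 0 (assumed simplification).
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # matches the 'lt 1.15 2' result above
```

Comparing numerically per field is what makes `1.9 < 1.15` true here, where a plain string sort would get it backwards.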
00:05:08.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:08.960 04:41:59 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.960 04:41:59 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.960 04:41:59 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:08.960 04:41:59 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.960 04:41:59 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:08.960 04:41:59 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.960 04:41:59 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:08.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.960 --rc genhtml_branch_coverage=1 00:05:08.960 --rc genhtml_function_coverage=1 00:05:08.960 --rc genhtml_legend=1 00:05:08.960 --rc geninfo_all_blocks=1 00:05:08.960 --rc geninfo_unexecuted_blocks=1 00:05:08.960 00:05:08.960 ' 00:05:08.960 04:41:59 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:08.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.960 --rc genhtml_branch_coverage=1 00:05:08.960 --rc genhtml_function_coverage=1 00:05:08.960 --rc genhtml_legend=1 00:05:08.960 --rc geninfo_all_blocks=1 00:05:08.960 --rc geninfo_unexecuted_blocks=1 00:05:08.960 00:05:08.960 ' 00:05:08.960 04:41:59 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:08.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.960 --rc genhtml_branch_coverage=1 00:05:08.960 --rc genhtml_function_coverage=1 00:05:08.960 --rc genhtml_legend=1 00:05:08.960 --rc geninfo_all_blocks=1 00:05:08.960 --rc geninfo_unexecuted_blocks=1 00:05:08.960 00:05:08.960 ' 00:05:08.960 04:41:59 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:08.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.961 --rc genhtml_branch_coverage=1 00:05:08.961 --rc genhtml_function_coverage=1 00:05:08.961 --rc genhtml_legend=1 00:05:08.961 --rc geninfo_all_blocks=1 00:05:08.961 --rc geninfo_unexecuted_blocks=1 00:05:08.961 00:05:08.961 ' 00:05:08.961 04:41:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:08.961 04:41:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:08.961 04:41:59 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:08.961 04:41:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:08.961 04:41:59 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.961 04:41:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:08.961 ************************************ 00:05:08.961 START TEST nvmf_target_core 00:05:08.961 ************************************ 00:05:08.961 04:42:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:08.961 * Looking for test storage... 
00:05:09.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.220 --rc genhtml_branch_coverage=1 00:05:09.220 --rc genhtml_function_coverage=1 00:05:09.220 --rc genhtml_legend=1 00:05:09.220 --rc geninfo_all_blocks=1 00:05:09.220 --rc geninfo_unexecuted_blocks=1 00:05:09.220 00:05:09.220 ' 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.220 --rc genhtml_branch_coverage=1 
00:05:09.220 --rc genhtml_function_coverage=1 00:05:09.220 --rc genhtml_legend=1 00:05:09.220 --rc geninfo_all_blocks=1 00:05:09.220 --rc geninfo_unexecuted_blocks=1 00:05:09.220 00:05:09.220 ' 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.220 --rc genhtml_branch_coverage=1 00:05:09.220 --rc genhtml_function_coverage=1 00:05:09.220 --rc genhtml_legend=1 00:05:09.220 --rc geninfo_all_blocks=1 00:05:09.220 --rc geninfo_unexecuted_blocks=1 00:05:09.220 00:05:09.220 ' 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.220 --rc genhtml_branch_coverage=1 00:05:09.220 --rc genhtml_function_coverage=1 00:05:09.220 --rc genhtml_legend=1 00:05:09.220 --rc geninfo_all_blocks=1 00:05:09.220 --rc geninfo_unexecuted_blocks=1 00:05:09.220 00:05:09.220 ' 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.220 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:09.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
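The trace above records a real scripting error: `'[' '' -eq 1 ']'` fails with `[: : integer expression expected` because an empty variable reached `-eq` at nvmf/common.sh line 33 (the variable's name is not visible in the trace). A small defensive sketch of the usual fix, with a stand-in variable name:

```shell
# The log above shows "[: : integer expression expected" from testing an
# empty string with -eq. Supplying a default value avoids the error:
flag=""                          # stand-in for the empty variable in the trace
if [ "${flag:-0}" -eq 1 ]; then  # ${flag:-0} substitutes 0 when flag is empty/unset
    echo "flag set"
else
    echo "flag unset"
fi
```

With the default in place the test is always numeric, so the branch is taken cleanly instead of emitting an error and relying on the `[` failure status.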
00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:09.221 ************************************ 00:05:09.221 START TEST nvmf_abort 00:05:09.221 ************************************ 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:09.221 * Looking for test storage... 
00:05:09.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.221 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.481 
04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.481 --rc genhtml_branch_coverage=1 00:05:09.481 --rc genhtml_function_coverage=1 00:05:09.481 --rc genhtml_legend=1 00:05:09.481 --rc geninfo_all_blocks=1 00:05:09.481 --rc 
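The trace above is SPDK's `cmp_versions` helper deciding whether `lcov` 1.15 is older than 2 (to pick the right coverage options): it splits each version string on `.`/`-` into arrays, then compares field by field, treating missing fields as 0. A minimal standalone sketch of that comparison, not SPDK's exact implementation:

```shell
#!/usr/bin/env bash
# Field-by-field version comparison in the spirit of scripts/common.sh's
# cmp_versions. ver_lt A B returns 0 (true) when version A < version B.
ver_lt() {
    local -a ver1 ver2
    local v max
    IFS=.- read -ra ver1 <<< "$1"   # split "1.15" -> (1 15)
    IFS=.- read -ra ver2 <<< "$2"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # a missing field compares as 0, so "2" == "2.0"
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}

ver_lt 1.15 2 && echo "older"   # lcov 1.15 predates the 2.x era
```

In the log this evaluates true, which is why `lcov_rc_opt` is set to the pre-2.x `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` spelling.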
geninfo_unexecuted_blocks=1 00:05:09.481 00:05:09.481 ' 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.481 --rc genhtml_branch_coverage=1 00:05:09.481 --rc genhtml_function_coverage=1 00:05:09.481 --rc genhtml_legend=1 00:05:09.481 --rc geninfo_all_blocks=1 00:05:09.481 --rc geninfo_unexecuted_blocks=1 00:05:09.481 00:05:09.481 ' 00:05:09.481 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.481 --rc genhtml_branch_coverage=1 00:05:09.481 --rc genhtml_function_coverage=1 00:05:09.481 --rc genhtml_legend=1 00:05:09.482 --rc geninfo_all_blocks=1 00:05:09.482 --rc geninfo_unexecuted_blocks=1 00:05:09.482 00:05:09.482 ' 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.482 --rc genhtml_branch_coverage=1 00:05:09.482 --rc genhtml_function_coverage=1 00:05:09.482 --rc genhtml_legend=1 00:05:09.482 --rc geninfo_all_blocks=1 00:05:09.482 --rc geninfo_unexecuted_blocks=1 00:05:09.482 00:05:09.482 ' 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.482 04:42:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:09.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
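The `[: : integer expression expected` error above comes from `'[' '' -eq 1 ']'`: an unset variable expands to an empty string, which `[` cannot parse as an integer, so the test prints an error and returns status 2 (the harness tolerates this because the non-zero status simply skips the branch). A sketch of the failure and the usual guards, with `flag` as a hypothetical stand-in for whichever variable was empty at nvmf/common.sh line 33:

```shell
#!/usr/bin/env bash
# Reproduce, then guard, the "[: : integer expression expected" failure.
flag=""   # hypothetical stand-in for the empty flag in the log

# Unguarded: '' is not an integer, so [ errors out with non-zero status.
[ "$flag" -eq 1 ] 2>/dev/null || echo "unguarded test failed as in the log"

# Guard 1: default the expansion to 0 so [ always sees an integer.
if [ "${flag:-0}" -eq 1 ]; then echo "flag set"; fi

# Guard 2: arithmetic context, where an empty operand defaults cleanly.
if (( ${flag:-0} == 1 )); then echo "flag set"; fi
echo "guarded tests ran cleanly"
```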
gather_supported_nvmf_pci_devs 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:09.482 04:42:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:16.054 04:42:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:16.054 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:16.054 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:16.054 04:42:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:16.054 Found net devices under 0000:af:00.0: cvl_0_0 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:05:16.054 Found net devices under 0000:af:00.1: cvl_0_1 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:16.054 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:16.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:16.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:05:16.055 00:05:16.055 --- 10.0.0.2 ping statistics --- 00:05:16.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.055 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:16.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:16.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:05:16.055 00:05:16.055 --- 10.0.0.1 ping statistics --- 00:05:16.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.055 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort 
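The `nvmf_tcp_init` steps above carve the two detected `ice` ports into a point-to-point test topology: `cvl_0_0` moves into the `cvl_0_0_ns_spdk` namespace as the target side (10.0.0.2), `cvl_0_1` stays in the root namespace as the initiator (10.0.0.1), and a ping in each direction verifies the path. A dry-run sketch of that sequence (printed rather than executed, since the real commands need root and these specific NICs):

```shell
#!/usr/bin/env bash
# Dry-run of the namespace topology nvmf_tcp_init builds in this log.
# TGT_IF/INI_IF are the cvl_0_* names from this host; substitute your own.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }   # swap the body for "$@" (as root) to apply for real

run ip -4 addr flush "$TGT_IF"                      # start from a clean slate
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"                              # target-side namespace
run ip link set "$TGT_IF" netns "$NS"               # move target NIC into it
run ip addr add 10.0.0.1/24 dev "$INI_IF"           # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                              # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator
```

Isolating the target NIC in its own namespace is what lets a single host exercise a real TCP path end to end: the target app later runs under `ip netns exec cvl_0_0_ns_spdk`, as seen when `nvmf_tgt` starts below.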
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=449354 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 449354 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 449354 ']' 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.055 [2024-12-10 04:42:06.460624] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:05:16.055 [2024-12-10 04:42:06.460673] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:16.055 [2024-12-10 04:42:06.538937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.055 [2024-12-10 04:42:06.580746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:16.055 [2024-12-10 04:42:06.580779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:16.055 [2024-12-10 04:42:06.580787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:16.055 [2024-12-10 04:42:06.580794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:16.055 [2024-12-10 04:42:06.580799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:16.055 [2024-12-10 04:42:06.582036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.055 [2024-12-10 04:42:06.582143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.055 [2024-12-10 04:42:06.582144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.055 [2024-12-10 04:42:06.718790] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.055 Malloc0 00:05:16.055 04:42:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.055 Delay0 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.055 [2024-12-10 04:42:06.790647] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.055 04:42:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:16.055 [2024-12-10 04:42:06.927898] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:17.960 [2024-12-10 04:42:08.995539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1c6d0 is same with the state(6) to be set 00:05:17.960 Initializing NVMe Controllers 00:05:17.960 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:17.960 controller IO queue size 128 less than required 00:05:17.960 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:17.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:17.960 Initialization complete. Launching workers. 
00:05:17.960 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37452 00:05:17.960 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37513, failed to submit 62 00:05:17.960 success 37456, unsuccessful 57, failed 0 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:17.960 rmmod nvme_tcp 00:05:17.960 rmmod nvme_fabrics 00:05:17.960 rmmod nvme_keyring 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:17.960 04:42:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 449354 ']' 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 449354 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 449354 ']' 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 449354 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:17.960 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 449354 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 449354' 00:05:18.219 killing process with pid 449354 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 449354 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 449354 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:18.219 04:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:20.758 00:05:20.758 real 0m11.144s 00:05:20.758 user 0m11.495s 00:05:20.758 sys 0m5.501s 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:20.758 ************************************ 00:05:20.758 END TEST nvmf_abort 00:05:20.758 ************************************ 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:20.758 ************************************ 00:05:20.758 START TEST nvmf_ns_hotplug_stress 00:05:20.758 ************************************ 00:05:20.758 04:42:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:20.758 * Looking for test storage... 00:05:20.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.758 
04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.758 04:42:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:20.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.758 --rc genhtml_branch_coverage=1 00:05:20.758 --rc genhtml_function_coverage=1 00:05:20.758 --rc genhtml_legend=1 00:05:20.758 --rc geninfo_all_blocks=1 00:05:20.758 --rc geninfo_unexecuted_blocks=1 00:05:20.758 00:05:20.758 ' 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:20.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.758 --rc genhtml_branch_coverage=1 00:05:20.758 --rc genhtml_function_coverage=1 00:05:20.758 --rc genhtml_legend=1 00:05:20.758 --rc geninfo_all_blocks=1 00:05:20.758 --rc geninfo_unexecuted_blocks=1 00:05:20.758 00:05:20.758 ' 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:20.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.758 --rc genhtml_branch_coverage=1 00:05:20.758 --rc genhtml_function_coverage=1 00:05:20.758 --rc genhtml_legend=1 00:05:20.758 --rc geninfo_all_blocks=1 00:05:20.758 --rc geninfo_unexecuted_blocks=1 00:05:20.758 00:05:20.758 ' 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:20.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.758 --rc genhtml_branch_coverage=1 00:05:20.758 --rc genhtml_function_coverage=1 00:05:20.758 --rc genhtml_legend=1 00:05:20.758 --rc geninfo_all_blocks=1 00:05:20.758 --rc geninfo_unexecuted_blocks=1 00:05:20.758 
00:05:20.758 ' 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:20.758 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:20.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:20.759 04:42:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:27.331 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:27.331 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:27.331 04:42:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:27.331 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:27.331 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:27.332 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:27.332 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:27.332 04:42:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:27.332 Found net devices under 0000:af:00.0: cvl_0_0 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:27.332 04:42:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:27.332 Found net devices under 0000:af:00.1: cvl_0_1 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:27.332 04:42:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:27.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:27.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:05:27.332 00:05:27.332 --- 10.0.0.2 ping statistics --- 00:05:27.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:27.332 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:05:27.332 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:27.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:27.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:05:27.332 00:05:27.332 --- 10.0.0.1 ping statistics --- 00:05:27.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:27.332 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=453604 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 453604 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 453604 ']' 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:27.333 [2024-12-10 04:42:17.710375] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:27.333 [2024-12-10 04:42:17.710424] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:27.333 [2024-12-10 04:42:17.789119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:27.333 [2024-12-10 04:42:17.830356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:27.333 [2024-12-10 04:42:17.830390] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:27.333 [2024-12-10 04:42:17.830398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:27.333 [2024-12-10 04:42:17.830405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:27.333 [2024-12-10 04:42:17.830410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:27.333 [2024-12-10 04:42:17.831726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.333 [2024-12-10 04:42:17.831834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.333 [2024-12-10 04:42:17.831835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:27.333 04:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:27.333 [2024-12-10 04:42:18.141207] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:27.333 04:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:27.333 04:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:27.593 [2024-12-10 04:42:18.526622] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:27.593 04:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:27.851 04:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:27.851 Malloc0 00:05:27.851 04:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:28.110 Delay0 00:05:28.110 04:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.369 04:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:28.628 NULL1 00:05:28.628 04:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:28.628 04:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=454023 00:05:28.628 04:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:28.628 04:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:28.892 04:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.892 Read completed with error (sct=0, sc=11) 00:05:28.892 04:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.152 04:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:29.152 04:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:29.410 true 00:05:29.410 04:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:29.410 04:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.425 04:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.425 04:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:30.425 04:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:30.709 true 00:05:30.709 04:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:30.709 04:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.709 04:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.968 04:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:30.968 04:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:31.226 true 00:05:31.226 04:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:31.226 04:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.226 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.486 04:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.486 04:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:31.486 04:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:31.745 true 00:05:31.745 04:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:31.745 04:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.681 04:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.682 04:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:32.682 04:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:32.940 true 00:05:32.940 04:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:32.940 04:42:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.199 04:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.458 04:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:33.458 04:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:33.458 true 00:05:33.458 04:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:33.458 04:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.836 04:42:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.836 04:42:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:34.836 04:42:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:35.095 true 00:05:35.095 04:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:35.095 04:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:36.032 04:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.291 04:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:36.291 04:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:36.291 true 00:05:36.291 04:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:36.291 04:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.550 04:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.809 04:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:36.809 04:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:36.809 true 00:05:37.068 04:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:37.068 04:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.005 04:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.266 04:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:38.266 04:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:38.524 true 00:05:38.525 04:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:38.525 04:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.462 04:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.462 04:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:39.462 04:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:39.721 true 00:05:39.721 04:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:39.721 04:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.980 04:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.980 04:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:39.980 04:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:40.240 true 00:05:40.240 04:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:40.240 04:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.619 04:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.619 04:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:41.619 04:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:41.878 true 00:05:41.878 04:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:41.878 04:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.815 04:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.815 04:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:42.815 04:42:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:43.074 true 00:05:43.074 04:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:43.074 04:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.332 04:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.332 04:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:43.332 04:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:43.591 true 00:05:43.592 04:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:43.592 04:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.970 04:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.970 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:05:44.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.970 04:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:44.970 04:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:44.970 true 00:05:45.229 04:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:45.229 04:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.797 04:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.055 04:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:46.055 04:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:46.314 true 00:05:46.314 04:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:46.314 04:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.573 04:42:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.832 04:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:46.832 04:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:46.832 true 00:05:46.832 04:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:46.832 04:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.209 04:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.209 04:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:48.209 04:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:48.468 true 00:05:48.468 04:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:48.468 04:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.405 04:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.405 04:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:49.405 04:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:49.664 true 00:05:49.664 04:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:49.664 04:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.923 04:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.182 04:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:50.182 04:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:50.182 true 00:05:50.182 04:42:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:50.182 04:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.559 04:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.560 04:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:51.560 04:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:51.819 true 00:05:51.819 04:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:51.819 04:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.756 04:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:05:52.756 04:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:52.756 04:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:53.015 true 00:05:53.015 04:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:53.015 04:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.015 04:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.274 04:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:53.274 04:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:53.533 true 00:05:53.533 04:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:53.533 04:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.473 04:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.732 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:05:54.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.732 04:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:54.732 04:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:55.009 true 00:05:55.009 04:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:55.009 04:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.947 04:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.947 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:55.947 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:56.207 true 00:05:56.207 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:56.207 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.465 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.723 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:56.723 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:56.723 true 00:05:56.723 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023 00:05:56.723 04:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.103 04:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.103 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:58.103 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:05:58.362 true
00:05:58.362 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023
00:05:58.362 04:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:59.299 Initializing NVMe Controllers
00:05:59.299 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:59.299 Controller IO queue size 128, less than required.
00:05:59.299 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:59.299 Controller IO queue size 128, less than required.
00:05:59.299 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:59.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:59.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:59.299 Initialization complete. Launching workers.
00:05:59.299 ========================================================
00:05:59.299                                                                           Latency(us)
00:05:59.299 Device Information                                                      : IOPS       MiB/s    Average      min        max
00:05:59.299 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2340.01       1.14   38195.19    1925.05 1076631.22
00:05:59.299 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18086.32       8.83    7077.01    1298.14  370504.88
00:05:59.299 ========================================================
00:05:59.299 Total                                                                   :   20426.33       9.97   10641.87    1298.14 1076631.22
00:05:59.299
00:05:59.299 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:59.299 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:05:59.299 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:05:59.558 true
00:05:59.558 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 454023
00:05:59.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (454023) - No such process
00:05:59.558 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 454023
00:05:59.558 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:59.817 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:59.817
04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:59.817 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:59.817 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:59.817 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:59.817 04:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:00.076 null0 00:06:00.076 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.076 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.076 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:00.334 null1 00:06:00.334 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.334 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.334 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:00.592 null2 00:06:00.592 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.592 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.592 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:00.850 null3 00:06:00.850 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.850 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.850 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:00.850 null4 00:06:00.850 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.850 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.850 04:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:01.108 null5 00:06:01.108 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.108 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.108 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:01.366 null6 00:06:01.366 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.366 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.366 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:01.624 null7 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:01.624 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 459686 459688 459691 459695 459698 459701 459703 459706 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.625 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.883 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:01.883 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.883 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.884 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.884 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:01.884 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:01.884 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:01.884 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:01.884 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.884 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.884 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.884 04:42:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.884 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.884 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.884 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.884 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.884 04:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.884 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.884 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.884 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.884 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.884 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.884 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.884 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:01.884 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.884 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.884 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.884 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.884 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.142 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.142 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.142 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.143 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.143 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.143 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:06:02.143 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.143 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.143 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.143 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.143 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.401 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.401 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.401 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:02.401 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.401 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.401 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:02.401 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.401 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.401 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:02.401 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.401 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.401 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:02.402 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.402 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.402 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.402 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.402 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.402 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.402 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.402 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.402 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.402 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.402 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.402 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.660 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.660 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.660 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.660 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.660 04:42:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.660 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.660 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.660 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.919 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.919 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.919 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.919 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.919 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.919 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:02.919 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.919 04:42:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.919 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:02.919 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.919 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.919 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.919 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.919 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.919 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.919 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.920 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.920 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:02.920 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.920 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:02.920 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:02.920 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.920 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.920 04:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.920 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.920 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.920 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.920 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.179 04:42:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.179 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.438 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.438 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.438 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.438 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.438 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.438 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.438 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.438 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.697 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.697 04:42:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.957 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.957 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.957 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.957 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.957 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.957 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.957 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.957 04:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.957 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.957 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.957 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.957 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.957 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.957 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.957 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.957 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.957 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.957 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.957 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.957 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.957 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.957 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.957 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.216 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.216 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.216 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.216 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.216 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.216 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.216 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.216 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.216 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.216 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.216 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.216 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.216 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.216 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.216 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.216 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.216 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.474 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.734 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.734 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.734 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.734 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.734 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.734 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.734 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.734 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.992 04:42:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.992 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.993 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.993 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.993 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.993 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.993 04:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.993 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.993 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.993 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.993 
04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.993 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.993 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.993 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.252 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.511 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.512 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.512 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.512 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.512 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.512 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.512 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.512 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.771 04:42:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:05.771 rmmod nvme_tcp 00:06:05.771 rmmod nvme_fabrics 00:06:05.771 rmmod nvme_keyring 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:05.771 
04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 453604 ']' 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 453604 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 453604 ']' 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 453604 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 453604 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 453604' 00:06:05.771 killing process with pid 453604 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 453604 00:06:05.771 04:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 453604 00:06:06.031 04:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:06.031 04:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:06:06.031 04:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:06.031 04:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:06.031 04:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:06.031 04:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:06.031 04:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:06.031 04:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:06.031 04:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:06.031 04:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.031 04:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:06.031 04:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:08.568 00:06:08.568 real 0m47.653s 00:06:08.568 user 3m14.091s 00:06:08.568 sys 0m15.487s 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:08.568 ************************************ 00:06:08.568 END TEST nvmf_ns_hotplug_stress 00:06:08.568 ************************************ 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:08.568 ************************************ 00:06:08.568 START TEST nvmf_delete_subsystem 00:06:08.568 ************************************ 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:08.568 * Looking for test storage... 00:06:08.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.568 
04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.568 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.569 --rc genhtml_branch_coverage=1 00:06:08.569 --rc genhtml_function_coverage=1 00:06:08.569 --rc genhtml_legend=1 
00:06:08.569 --rc geninfo_all_blocks=1 00:06:08.569 --rc geninfo_unexecuted_blocks=1 00:06:08.569 00:06:08.569 ' 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.569 --rc genhtml_branch_coverage=1 00:06:08.569 --rc genhtml_function_coverage=1 00:06:08.569 --rc genhtml_legend=1 00:06:08.569 --rc geninfo_all_blocks=1 00:06:08.569 --rc geninfo_unexecuted_blocks=1 00:06:08.569 00:06:08.569 ' 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.569 --rc genhtml_branch_coverage=1 00:06:08.569 --rc genhtml_function_coverage=1 00:06:08.569 --rc genhtml_legend=1 00:06:08.569 --rc geninfo_all_blocks=1 00:06:08.569 --rc geninfo_unexecuted_blocks=1 00:06:08.569 00:06:08.569 ' 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.569 --rc genhtml_branch_coverage=1 00:06:08.569 --rc genhtml_function_coverage=1 00:06:08.569 --rc genhtml_legend=1 00:06:08.569 --rc geninfo_all_blocks=1 00:06:08.569 --rc geninfo_unexecuted_blocks=1 00:06:08.569 00:06:08.569 ' 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:08.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:08.569 04:42:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:08.569 04:42:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.272 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:15.272 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:15.272 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:15.272 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:15.272 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:15.272 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:15.272 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:15.272 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:15.272 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:15.272 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:15.272 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:15.272 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:15.272 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:15.272 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:15.272 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:06:15.272 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:15.273 04:43:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:15.273 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:15.273 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:15.273 Found net devices under 0000:af:00.0: cvl_0_0 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:15.273 04:43:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:15.273 Found net devices under 0000:af:00.1: cvl_0_1 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:15.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:15.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:06:15.273 00:06:15.273 --- 10.0.0.2 ping statistics --- 00:06:15.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.273 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:15.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:15.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:06:15.273 00:06:15.273 --- 10.0.0.1 ping statistics --- 00:06:15.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.273 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.273 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=464037 00:06:15.274 04:43:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 464037 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 464037 ']' 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.274 [2024-12-10 04:43:05.456990] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:15.274 [2024-12-10 04:43:05.457034] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.274 [2024-12-10 04:43:05.534614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.274 [2024-12-10 04:43:05.571512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:15.274 [2024-12-10 04:43:05.571548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:15.274 [2024-12-10 04:43:05.571555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:15.274 [2024-12-10 04:43:05.571561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:15.274 [2024-12-10 04:43:05.571568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:15.274 [2024-12-10 04:43:05.572742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.274 [2024-12-10 04:43:05.572743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.274 [2024-12-10 04:43:05.717015] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.274 [2024-12-10 04:43:05.737215] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.274 NULL1 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.274 04:43:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.274 Delay0 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=464060 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:15.274 04:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:15.274 [2024-12-10 04:43:05.848141] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:16.650 04:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:16.650 04:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.650 04:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 starting I/O failed: -6 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 starting I/O failed: -6 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 starting I/O failed: -6 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 starting I/O failed: -6 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 starting I/O failed: -6 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 starting I/O failed: -6 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error 
(sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 starting I/O failed: -6 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 starting I/O failed: -6 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 starting I/O failed: -6 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 starting I/O failed: -6 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 starting I/O failed: -6 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 [2024-12-10 04:43:07.923311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf80780 is same with the state(6) to be set 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with 
error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Write completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 00:06:16.909 Read completed with error (sct=0, sc=8) 
00:06:16.909 Read completed with error (sct=0, sc=8)
00:06:16.909 Write completed with error (sct=0, sc=8)
00:06:16.909 [2024-12-10 04:43:07.924364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf80b40 is same with the state(6) to be set
00:06:16.909 starting I/O failed: -6
[... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries elided ...]
00:06:17.846 [2024-12-10 04:43:08.901222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf819b0 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries elided ...]
00:06:17.846 [2024-12-10 04:43:08.926514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf802c0 is same with the state(6) to be set
00:06:17.846 [2024-12-10 04:43:08.926863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf80960 is same with the state(6) to be set
00:06:17.846 [2024-12-10 04:43:08.931417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0f6800d350 is same with the state(6) to be set
00:06:17.847 [2024-12-10 04:43:08.931908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0f6800d7c0 is same with the state(6) to be set
00:06:17.847 Initializing NVMe Controllers
00:06:17.847 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:17.847 Controller IO queue size 128, less than required.
00:06:17.847 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
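When triaging a burst like the one above, it helps to tally the completion-error entries mechanically. The snippet below is a rough illustration only (the heredoc mimics this transcript's line format; in practice you would feed the real autotest log file):

```shell
#!/bin/sh
# Sketch: tally NVMe completion errors by opcode from a perf log.
# The sample lines mimic this transcript's format; substitute the real
# console log for the heredoc when triaging an actual run.
log=$(mktemp)
cat > "$log" <<'EOF'
00:06:16.909 Read completed with error (sct=0, sc=8)
00:06:16.909 Write completed with error (sct=0, sc=8)
00:06:16.909 Read completed with error (sct=0, sc=8)
00:06:16.910 starting I/O failed: -6
EOF
reads=$(grep -c 'Read completed with error' "$log")
writes=$(grep -c 'Write completed with error' "$log")
failed=$(grep -c 'starting I/O failed' "$log")
echo "reads=$reads writes=$writes failed=$failed"   # prints: reads=2 writes=1 failed=1
rm -f "$log"
```

The counts make it easy to see whether reads and writes fail symmetrically (as they do here, since the subsystem was deleted under load).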
00:06:17.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:17.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:17.847 Initialization complete. Launching workers.
00:06:17.847 ========================================================
00:06:17.847 Latency(us)
00:06:17.847 Device Information : IOPS MiB/s Average min max
00:06:17.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.28 0.08 901807.66 878.75 1006050.83
00:06:17.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.80 0.08 925033.46 216.31 2003086.54
00:06:17.847 ========================================================
00:06:17.847 Total : 330.07 0.16 913262.92 216.31 2003086.54
00:06:17.847
00:06:17.847 [2024-12-10 04:43:08.932400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf819b0 (9): Bad file descriptor
00:06:17.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:17.847 04:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:17.847 04:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:17.847 04:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 464060
00:06:17.847 04:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 464060
00:06:18.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (464060) - No such process
00:06:18.414 04:43:09
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 464060 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 464060 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 464060 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:18.414 [2024-12-10 04:43:09.461359] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=464735 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 464735 00:06:18.414 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:18.673 [2024-12-10 04:43:09.550016] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:18.931 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:18.931 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 464735 00:06:18.931 04:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:19.498 04:43:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:19.498 04:43:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 464735 00:06:19.498 04:43:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:20.065 04:43:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:20.065 04:43:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 464735 00:06:20.065 04:43:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:20.632 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:20.632 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 464735 00:06:20.632 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:20.891 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:20.891 04:43:11 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 464735 00:06:20.891 04:43:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:21.460 04:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:21.460 04:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 464735 00:06:21.460 04:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:21.718 Initializing NVMe Controllers 00:06:21.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:21.718 Controller IO queue size 128, less than required. 00:06:21.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:21.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:21.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:21.718 Initialization complete. Launching workers. 
00:06:21.718 ========================================================
00:06:21.718 Latency(us)
00:06:21.718 Device Information : IOPS MiB/s Average min max
00:06:21.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002727.85 1000115.17 1041782.73
00:06:21.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004197.92 1000196.20 1041615.70
00:06:21.718 ========================================================
00:06:21.718 Total : 256.00 0.12 1003462.88 1000115.17 1041782.73
00:06:21.718
00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 464735
00:06:21.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (464735) - No such process
00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 464735
00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 00:06:21.977 rmmod nvme_tcp 00:06:21.977 rmmod nvme_fabrics 00:06:21.977 rmmod nvme_keyring 00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 464037 ']' 00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 464037 00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 464037 ']' 00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 464037 00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.977 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 464037 00:06:22.236 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.236 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.236 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 464037' 00:06:22.236 killing process with pid 464037 00:06:22.236 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 464037 00:06:22.236 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 464037 
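The killprocess/wait sequence traced above follows a common shell pattern: probe the pid with `kill -0`, signal it, then reap it with `wait`. A minimal standalone sketch of that pattern (the function name here is illustrative, not SPDK's actual helper):

```shell
#!/bin/sh
# Sketch of the kill-and-wait pattern used by the test scripts above.
# kill_and_wait is an illustrative name, not an SPDK helper.
kill_and_wait() {
    pid=$1
    # kill -0 sends no signal; it only checks that the process exists
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid" 2>/dev/null
        wait "$pid" 2>/dev/null   # reap the child; a killed job exits nonzero
    fi
    return 0
}
sleep 60 &
bg=$!
kill_and_wait "$bg"
kill -0 "$bg" 2>/dev/null && echo "still running" || echo "gone"   # prints: gone
```

Note that `wait` only works on children of the current shell, which is why the trace shows `wait 464735` failing harmlessly once the perf process has already exited.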
00:06:22.236 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:22.236 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:22.236 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:22.236 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:22.236 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:22.236 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:22.236 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:22.236 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:22.236 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:22.236 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.236 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:22.236 04:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:24.775 00:06:24.775 real 0m16.187s 00:06:24.775 user 0m29.124s 00:06:24.775 sys 0m5.561s 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.775 ************************************ 00:06:24.775 END TEST 
nvmf_delete_subsystem 00:06:24.775 ************************************ 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:24.775 ************************************ 00:06:24.775 START TEST nvmf_host_management 00:06:24.775 ************************************ 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:24.775 * Looking for test storage... 00:06:24.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.775 04:43:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.775 --rc genhtml_branch_coverage=1 00:06:24.775 --rc genhtml_function_coverage=1 00:06:24.775 --rc genhtml_legend=1 00:06:24.775 --rc 
geninfo_all_blocks=1 00:06:24.775 --rc geninfo_unexecuted_blocks=1 00:06:24.775 00:06:24.775 ' 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.775 --rc genhtml_branch_coverage=1 00:06:24.775 --rc genhtml_function_coverage=1 00:06:24.775 --rc genhtml_legend=1 00:06:24.775 --rc geninfo_all_blocks=1 00:06:24.775 --rc geninfo_unexecuted_blocks=1 00:06:24.775 00:06:24.775 ' 00:06:24.775 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:24.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.776 --rc genhtml_branch_coverage=1 00:06:24.776 --rc genhtml_function_coverage=1 00:06:24.776 --rc genhtml_legend=1 00:06:24.776 --rc geninfo_all_blocks=1 00:06:24.776 --rc geninfo_unexecuted_blocks=1 00:06:24.776 00:06:24.776 ' 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.776 --rc genhtml_branch_coverage=1 00:06:24.776 --rc genhtml_function_coverage=1 00:06:24.776 --rc genhtml_legend=1 00:06:24.776 --rc geninfo_all_blocks=1 00:06:24.776 --rc geninfo_unexecuted_blocks=1 00:06:24.776 00:06:24.776 ' 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.776 
04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:24.776 04:43:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.346 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:31.347 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:31.347 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.347 04:43:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:31.347 Found net devices under 0000:af:00.0: cvl_0_0 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:31.347 Found net devices under 0000:af:00.1: cvl_0_1 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:31.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:31.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:06:31.347 00:06:31.347 --- 10.0.0.2 ping statistics --- 00:06:31.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.347 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:31.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:31.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:06:31.347 00:06:31.347 --- 10.0.0.1 ping statistics --- 00:06:31.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.347 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:31.347 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:31.348 04:43:21 
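The `nvmf_tcp_init` phase above (nvmf/common.sh@250-291) moves one port of the NIC into a network namespace, addresses both sides, opens TCP port 4420, and ping-verifies the link. A dry-run sketch of that sequence follows; the interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addressing are taken from the log, while the `run_or_echo` wrapper is a hypothetical helper added so the sketch prints the commands instead of requiring root and real hardware.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup performed by nvmf_tcp_init.
# run_or_echo is a hypothetical wrapper: with DRY_RUN=1 (the default) it
# prints each command; set DRY_RUN=0 and run as root to actually apply them.
DRY_RUN=${DRY_RUN:-1}
run_or_echo() { if (( DRY_RUN )); then echo "$*"; else "$@"; fi; }

TARGET_IF=cvl_0_0        # NIC port moved into the target namespace
INITIATOR_IF=cvl_0_1     # NIC port left in the default (initiator) namespace
NS=cvl_0_0_ns_spdk

run_or_echo ip -4 addr flush "$TARGET_IF"
run_or_echo ip -4 addr flush "$INITIATOR_IF"
run_or_echo ip netns add "$NS"
run_or_echo ip link set "$TARGET_IF" netns "$NS"
run_or_echo ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run_or_echo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run_or_echo ip link set "$INITIATOR_IF" up
run_or_echo ip netns exec "$NS" ip link set "$TARGET_IF" up
run_or_echo ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP discovery port on the initiator-side interface, then
# verify reachability in both directions, as the log does:
run_or_echo iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run_or_echo ping -c 1 10.0.0.2
run_or_echo ip netns exec "$NS" ping -c 1 10.0.0.1
```

The split topology is the point of the design: the target process later runs under `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix visible in the log), so target and initiator exercise a real kernel network path between two namespaces on one host.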
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=468890 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 468890 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 468890 ']' 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.348 04:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.348 [2024-12-10 04:43:21.702649] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:06:31.348 [2024-12-10 04:43:21.702699] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.348 [2024-12-10 04:43:21.782885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.348 [2024-12-10 04:43:21.822632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:31.348 [2024-12-10 04:43:21.822673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:31.348 [2024-12-10 04:43:21.822679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:31.348 [2024-12-10 04:43:21.822685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:31.348 [2024-12-10 04:43:21.822689] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:31.348 [2024-12-10 04:43:21.824051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.348 [2024-12-10 04:43:21.824083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.348 [2024-12-10 04:43:21.824242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.348 [2024-12-10 04:43:21.824243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.607 [2024-12-10 04:43:22.599249] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:31.607 04:43:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.607 Malloc0 00:06:31.607 [2024-12-10 04:43:22.670930] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=469153 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 469153 /var/tmp/bdevperf.sock 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 469153 ']' 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:31.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:31.607 { 00:06:31.607 "params": { 00:06:31.607 "name": "Nvme$subsystem", 00:06:31.607 "trtype": "$TEST_TRANSPORT", 00:06:31.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:31.607 "adrfam": "ipv4", 00:06:31.607 "trsvcid": "$NVMF_PORT", 00:06:31.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:31.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:31.607 "hdgst": ${hdgst:-false}, 
00:06:31.607 "ddgst": ${ddgst:-false} 00:06:31.607 }, 00:06:31.607 "method": "bdev_nvme_attach_controller" 00:06:31.607 } 00:06:31.607 EOF 00:06:31.607 )") 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:31.607 04:43:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:31.607 "params": { 00:06:31.607 "name": "Nvme0", 00:06:31.607 "trtype": "tcp", 00:06:31.607 "traddr": "10.0.0.2", 00:06:31.607 "adrfam": "ipv4", 00:06:31.607 "trsvcid": "4420", 00:06:31.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:31.607 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:31.607 "hdgst": false, 00:06:31.607 "ddgst": false 00:06:31.607 }, 00:06:31.607 "method": "bdev_nvme_attach_controller" 00:06:31.607 }' 00:06:31.867 [2024-12-10 04:43:22.764211] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:31.867 [2024-12-10 04:43:22.764257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469153 ] 00:06:31.867 [2024-12-10 04:43:22.839022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.867 [2024-12-10 04:43:22.878582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.125 Running I/O for 10 seconds... 
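The `gen_nvmf_target_json` step above renders a heredoc template once per subsystem index, substituting `$subsystem` into the controller name and NQNs, and feeds the merged result to bdevperf via `--json /dev/fd/63`. A simplified, self-contained sketch of that templating is below; `gen_target_json_sketch` is a hypothetical stand-in for the real helper, producing one `bdev_nvme_attach_controller` entry with the same field values seen in the log.

```shell
#!/usr/bin/env bash
# Sketch of how the bdevperf --json config above is assembled: a heredoc
# template with a per-subsystem index substituted in. This mirrors the
# structure printed in the log; the function name is a sketch, not SPDK code.
gen_target_json_sketch() {
  local subsystem=$1 traddr=$2 trsvcid=$3
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# Subsystem 0 on the first target IP and port, as in the log:
gen_target_json_sketch 0 10.0.0.2 4420
```

In the real harness the per-subsystem fragments are collected into an array and normalized with `jq .` before being handed to bdevperf, which is why the log shows both the raw `$subsystem` template and the resolved `Nvme0` output.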
00:06:32.125 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.125 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:32.125 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:32.125 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.125 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.125 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.125 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:32.125 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:32.126 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:32.126 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:32.126 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:32.126 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:32.126 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:32.126 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:32.126 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:32.126 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:32.126 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.126 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.126 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.126 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:06:32.126 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:06:32.126 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:32.386 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:32.386 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:32.386 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:32.386 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:32.386 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.386 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.386 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.386 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=676 00:06:32.386 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 676 -ge 100 ']' 00:06:32.386 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:32.386 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:32.386 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:32.386 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:32.386 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.386 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.386 [2024-12-10 04:43:23.433628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 
[2024-12-10 04:43:23.433892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.386 [2024-12-10 04:43:23.433959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.386 [2024-12-10 04:43:23.433966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.433974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.433981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.433988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.433995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434052] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 
[2024-12-10 04:43:23.434224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:06:32.387 [2024-12-10 04:43:23.434468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 04:43:23.434531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.387 [2024-12-10 
04:43:23.434545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.387 [2024-12-10 04:43:23.434553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.388 [2024-12-10 04:43:23.434559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.388 [2024-12-10 04:43:23.434566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.388 [2024-12-10 04:43:23.434573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.388 [2024-12-10 04:43:23.434581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.388 [2024-12-10 04:43:23.434587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.388 [2024-12-10 04:43:23.434595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.388 [2024-12-10 04:43:23.434602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.388 [2024-12-10 04:43:23.435550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:32.388 task offset: 102144 on job bdev=Nvme0n1 fails 00:06:32.388 00:06:32.388 Latency(us) 00:06:32.388 [2024-12-10T03:43:23.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:32.388 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 
65536) 00:06:32.388 Job: Nvme0n1 ended in about 0.40 seconds with error 00:06:32.388 Verification LBA range: start 0x0 length 0x400 00:06:32.388 Nvme0n1 : 0.40 1939.55 121.22 161.63 0.00 29638.32 1451.15 26963.38 00:06:32.388 [2024-12-10T03:43:23.525Z] =================================================================================================================== 00:06:32.388 [2024-12-10T03:43:23.525Z] Total : 1939.55 121.22 161.63 0.00 29638.32 1451.15 26963.38 00:06:32.388 [2024-12-10 04:43:23.437896] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.388 [2024-12-10 04:43:23.437917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c40760 (9): Bad file descriptor 00:06:32.388 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.388 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:32.388 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.388 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:32.388 [2024-12-10 04:43:23.443255] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:32.388 [2024-12-10 04:43:23.443338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:32.388 [2024-12-10 04:43:23.443360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:32.388 [2024-12-10 04:43:23.443375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode0 00:06:32.388 [2024-12-10 04:43:23.443382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:32.388 [2024-12-10 04:43:23.443389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:32.388 [2024-12-10 04:43:23.443395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c40760 00:06:32.388 [2024-12-10 04:43:23.443414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c40760 (9): Bad file descriptor 00:06:32.388 [2024-12-10 04:43:23.443425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:06:32.388 [2024-12-10 04:43:23.443431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:06:32.388 [2024-12-10 04:43:23.443438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:06:32.388 [2024-12-10 04:43:23.443446] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:06:32.388 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.388 04:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:33.324 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 469153 00:06:33.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (469153) - No such process 00:06:33.324 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:33.324 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:33.583 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:33.583 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:33.583 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:33.583 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:33.583 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:33.583 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:33.583 { 00:06:33.583 "params": { 00:06:33.583 "name": "Nvme$subsystem", 00:06:33.583 "trtype": "$TEST_TRANSPORT", 00:06:33.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:33.583 "adrfam": "ipv4", 00:06:33.583 "trsvcid": "$NVMF_PORT", 00:06:33.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:33.583 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:06:33.583 "hdgst": ${hdgst:-false}, 00:06:33.583 "ddgst": ${ddgst:-false} 00:06:33.583 }, 00:06:33.583 "method": "bdev_nvme_attach_controller" 00:06:33.583 } 00:06:33.583 EOF 00:06:33.583 )") 00:06:33.583 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:33.583 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:33.583 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:33.583 04:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:33.583 "params": { 00:06:33.583 "name": "Nvme0", 00:06:33.583 "trtype": "tcp", 00:06:33.584 "traddr": "10.0.0.2", 00:06:33.584 "adrfam": "ipv4", 00:06:33.584 "trsvcid": "4420", 00:06:33.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:33.584 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:33.584 "hdgst": false, 00:06:33.584 "ddgst": false 00:06:33.584 }, 00:06:33.584 "method": "bdev_nvme_attach_controller" 00:06:33.584 }' 00:06:33.584 [2024-12-10 04:43:24.503322] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:33.584 [2024-12-10 04:43:24.503371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469401 ] 00:06:33.584 [2024-12-10 04:43:24.576881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.584 [2024-12-10 04:43:24.614504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.841 Running I/O for 1 seconds... 
00:06:35.218 2048.00 IOPS, 128.00 MiB/s 00:06:35.218 Latency(us) 00:06:35.218 [2024-12-10T03:43:26.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:35.218 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:35.218 Verification LBA range: start 0x0 length 0x400 00:06:35.218 Nvme0n1 : 1.01 2081.65 130.10 0.00 0.00 30264.12 4337.86 26713.72 00:06:35.218 [2024-12-10T03:43:26.355Z] =================================================================================================================== 00:06:35.218 [2024-12-10T03:43:26.355Z] Total : 2081.65 130.10 0.00 0.00 30264.12 4337.86 26713.72 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:35.218 04:43:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:35.218 rmmod nvme_tcp 00:06:35.218 rmmod nvme_fabrics 00:06:35.218 rmmod nvme_keyring 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 468890 ']' 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 468890 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 468890 ']' 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 468890 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 468890 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 468890' 00:06:35.218 killing process with pid 468890 00:06:35.218 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 468890 00:06:35.218 04:43:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 468890 00:06:35.477 [2024-12-10 04:43:26.389760] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:35.477 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:35.477 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:35.477 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:35.477 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:35.477 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:35.477 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:35.477 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:35.477 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:35.477 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:35.477 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.477 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.477 04:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.381 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:37.381 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:37.381 00:06:37.381 real 0m13.035s 00:06:37.381 user 0m22.441s 
00:06:37.381 sys 0m5.564s 00:06:37.381 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.381 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.381 ************************************ 00:06:37.381 END TEST nvmf_host_management 00:06:37.381 ************************************ 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:37.640 ************************************ 00:06:37.640 START TEST nvmf_lvol 00:06:37.640 ************************************ 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:37.640 * Looking for test storage... 
00:06:37.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.640 04:43:28 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:37.640 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:37.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.641 --rc genhtml_branch_coverage=1 00:06:37.641 --rc genhtml_function_coverage=1 00:06:37.641 --rc genhtml_legend=1 00:06:37.641 --rc geninfo_all_blocks=1 00:06:37.641 --rc geninfo_unexecuted_blocks=1 
00:06:37.641 00:06:37.641 ' 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:37.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.641 --rc genhtml_branch_coverage=1 00:06:37.641 --rc genhtml_function_coverage=1 00:06:37.641 --rc genhtml_legend=1 00:06:37.641 --rc geninfo_all_blocks=1 00:06:37.641 --rc geninfo_unexecuted_blocks=1 00:06:37.641 00:06:37.641 ' 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:37.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.641 --rc genhtml_branch_coverage=1 00:06:37.641 --rc genhtml_function_coverage=1 00:06:37.641 --rc genhtml_legend=1 00:06:37.641 --rc geninfo_all_blocks=1 00:06:37.641 --rc geninfo_unexecuted_blocks=1 00:06:37.641 00:06:37.641 ' 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:37.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.641 --rc genhtml_branch_coverage=1 00:06:37.641 --rc genhtml_function_coverage=1 00:06:37.641 --rc genhtml_legend=1 00:06:37.641 --rc geninfo_all_blocks=1 00:06:37.641 --rc geninfo_unexecuted_blocks=1 00:06:37.641 00:06:37.641 ' 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.641 04:43:28 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:37.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.641 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.900 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:37.900 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:37.900 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:37.900 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:37.900 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:37.900 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:37.900 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:37.900 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.900 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:37.900 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:37.900 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:37.900 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.901 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.901 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.901 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:37.901 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:37.901 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:37.901 04:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:44.471 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:44.471 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:44.471 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:44.472 
04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:44.472 Found net devices under 0000:af:00.0: cvl_0_0 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:44.472 04:43:34 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:44.472 Found net devices under 0000:af:00.1: cvl_0_1 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:44.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:44.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:06:44.472 00:06:44.472 --- 10.0.0.2 ping statistics --- 00:06:44.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.472 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:44.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:44.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:06:44.472 00:06:44.472 --- 10.0.0.1 ping statistics --- 00:06:44.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.472 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=473157 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 473157 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 473157 ']' 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.472 [2024-12-10 04:43:34.766702] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:06:44.472 [2024-12-10 04:43:34.766753] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.472 [2024-12-10 04:43:34.845279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.472 [2024-12-10 04:43:34.884381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:44.472 [2024-12-10 04:43:34.884419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:44.472 [2024-12-10 04:43:34.884426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:44.472 [2024-12-10 04:43:34.884431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:44.472 [2024-12-10 04:43:34.884436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:44.472 [2024-12-10 04:43:34.885732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.472 [2024-12-10 04:43:34.885837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.472 [2024-12-10 04:43:34.885839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:44.472 04:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.472 04:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:44.472 04:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:44.472 [2024-12-10 04:43:35.194948] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.472 04:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:44.473 04:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:44.473 04:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:44.731 04:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:44.731 04:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:44.731 04:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:44.990 04:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=10ea130d-4d89-4e9f-b6c0-152f22888835 00:06:44.990 04:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 10ea130d-4d89-4e9f-b6c0-152f22888835 lvol 20 00:06:45.249 04:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e6b9da22-51cf-41ff-9dad-9802d2cb9303 00:06:45.249 04:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:45.507 04:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e6b9da22-51cf-41ff-9dad-9802d2cb9303 00:06:45.765 04:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:45.765 [2024-12-10 04:43:36.839759] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:45.765 04:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:46.023 04:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=473587 00:06:46.023 04:43:37 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:46.023 04:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:46.960 04:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e6b9da22-51cf-41ff-9dad-9802d2cb9303 MY_SNAPSHOT 00:06:47.218 04:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7d1a10cc-3b74-4002-9951-5bd727455f2c 00:06:47.218 04:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e6b9da22-51cf-41ff-9dad-9802d2cb9303 30 00:06:47.477 04:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7d1a10cc-3b74-4002-9951-5bd727455f2c MY_CLONE 00:06:47.736 04:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=45d8736c-682d-41ce-97da-c711da551202 00:06:47.736 04:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 45d8736c-682d-41ce-97da-c711da551202 00:06:48.303 04:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 473587 00:06:56.421 Initializing NVMe Controllers 00:06:56.421 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:56.421 Controller IO queue size 128, less than required. 00:06:56.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:56.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:56.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:56.421 Initialization complete. Launching workers. 00:06:56.421 ======================================================== 00:06:56.421 Latency(us) 00:06:56.421 Device Information : IOPS MiB/s Average min max 00:06:56.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12085.20 47.21 10595.19 1390.69 100224.60 00:06:56.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11994.50 46.85 10672.33 3876.58 48107.50 00:06:56.421 ======================================================== 00:06:56.421 Total : 24079.70 94.06 10633.61 1390.69 100224.60 00:06:56.421 00:06:56.421 04:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:56.680 04:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e6b9da22-51cf-41ff-9dad-9802d2cb9303 00:06:56.939 04:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 10ea130d-4d89-4e9f-b6c0-152f22888835 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:57.198 04:43:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:57.198 rmmod nvme_tcp 00:06:57.198 rmmod nvme_fabrics 00:06:57.198 rmmod nvme_keyring 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 473157 ']' 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 473157 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 473157 ']' 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 473157 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 473157 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 473157' 00:06:57.198 killing process with pid 473157 00:06:57.198 04:43:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 473157 00:06:57.198 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 473157 00:06:57.457 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:57.457 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:57.457 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:57.457 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:57.457 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:57.457 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:57.457 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:57.457 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:57.457 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:57.457 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.457 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.457 04:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:59.994 00:06:59.994 real 0m21.937s 00:06:59.994 user 1m3.254s 00:06:59.994 sys 0m7.691s 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:59.994 ************************************ 00:06:59.994 END TEST 
nvmf_lvol 00:06:59.994 ************************************ 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:59.994 ************************************ 00:06:59.994 START TEST nvmf_lvs_grow 00:06:59.994 ************************************ 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:59.994 * Looking for test storage... 00:06:59.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.994 04:43:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:59.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.994 --rc genhtml_branch_coverage=1 00:06:59.994 --rc genhtml_function_coverage=1 00:06:59.994 --rc genhtml_legend=1 00:06:59.994 --rc geninfo_all_blocks=1 00:06:59.994 --rc geninfo_unexecuted_blocks=1 00:06:59.994 00:06:59.994 ' 
00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:59.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.994 --rc genhtml_branch_coverage=1 00:06:59.994 --rc genhtml_function_coverage=1 00:06:59.994 --rc genhtml_legend=1 00:06:59.994 --rc geninfo_all_blocks=1 00:06:59.994 --rc geninfo_unexecuted_blocks=1 00:06:59.994 00:06:59.994 ' 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:59.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.994 --rc genhtml_branch_coverage=1 00:06:59.994 --rc genhtml_function_coverage=1 00:06:59.994 --rc genhtml_legend=1 00:06:59.994 --rc geninfo_all_blocks=1 00:06:59.994 --rc geninfo_unexecuted_blocks=1 00:06:59.994 00:06:59.994 ' 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:59.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.994 --rc genhtml_branch_coverage=1 00:06:59.994 --rc genhtml_function_coverage=1 00:06:59.994 --rc genhtml_legend=1 00:06:59.994 --rc geninfo_all_blocks=1 00:06:59.994 --rc geninfo_unexecuted_blocks=1 00:06:59.994 00:06:59.994 ' 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.994 04:43:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.994 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.995 
04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.995 04:43:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:59.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.995 
04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:59.995 04:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.565 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:06.566 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:06.566 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.566 
04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:06.566 Found net devices under 0000:af:00.0: cvl_0_0 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:06.566 Found net devices under 0000:af:00.1: cvl_0_1 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:06.566 04:43:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:06.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:07:06.566 00:07:06.566 --- 10.0.0.2 ping statistics --- 00:07:06.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.566 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:07:06.566 00:07:06.566 --- 10.0.0.1 ping statistics --- 00:07:06.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.566 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=479071 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 479071 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 479071 ']' 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.566 04:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.566 [2024-12-10 04:43:56.928773] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:07:06.566 [2024-12-10 04:43:56.928817] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.566 [2024-12-10 04:43:57.005249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.567 [2024-12-10 04:43:57.042535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.567 [2024-12-10 04:43:57.042572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.567 [2024-12-10 04:43:57.042579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.567 [2024-12-10 04:43:57.042584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.567 [2024-12-10 04:43:57.042589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:06.567 [2024-12-10 04:43:57.043059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:06.567 [2024-12-10 04:43:57.350509] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.567 ************************************ 00:07:06.567 START TEST lvs_grow_clean 00:07:06.567 ************************************ 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:06.567 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:06.826 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2d4561e5-808d-4dd0-b55a-2d9c17d821cf 00:07:06.826 04:43:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d4561e5-808d-4dd0-b55a-2d9c17d821cf 00:07:06.826 04:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:07.085 04:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:07.085 04:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:07.085 04:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2d4561e5-808d-4dd0-b55a-2d9c17d821cf lvol 150 00:07:07.085 04:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8dcab520-4532-4ded-a8e7-cd2963f78635 00:07:07.085 04:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:07.085 04:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:07.344 [2024-12-10 04:43:58.366055] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:07.344 [2024-12-10 04:43:58.366106] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:07.344 true 00:07:07.344 04:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d4561e5-808d-4dd0-b55a-2d9c17d821cf 00:07:07.344 04:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:07.602 04:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:07.602 04:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:07.861 04:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8dcab520-4532-4ded-a8e7-cd2963f78635 00:07:07.861 04:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:08.120 [2024-12-10 04:43:59.104277] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.120 04:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:08.379 04:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:08.379 04:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=479449 00:07:08.379 04:43:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:08.379 04:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 479449 /var/tmp/bdevperf.sock 00:07:08.379 04:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 479449 ']' 00:07:08.379 04:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:08.379 04:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.379 04:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:08.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:08.379 04:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.379 04:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:08.379 [2024-12-10 04:43:59.325112] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:07:08.379 [2024-12-10 04:43:59.325165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479449 ] 00:07:08.379 [2024-12-10 04:43:59.399075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.380 [2024-12-10 04:43:59.445615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.638 04:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.638 04:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:08.638 04:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:08.897 Nvme0n1 00:07:08.897 04:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:08.897 [ 00:07:08.897 { 00:07:08.897 "name": "Nvme0n1", 00:07:08.897 "aliases": [ 00:07:08.897 "8dcab520-4532-4ded-a8e7-cd2963f78635" 00:07:08.897 ], 00:07:08.897 "product_name": "NVMe disk", 00:07:08.897 "block_size": 4096, 00:07:08.897 "num_blocks": 38912, 00:07:08.897 "uuid": "8dcab520-4532-4ded-a8e7-cd2963f78635", 00:07:08.897 "numa_id": 1, 00:07:08.897 "assigned_rate_limits": { 00:07:08.897 "rw_ios_per_sec": 0, 00:07:08.897 "rw_mbytes_per_sec": 0, 00:07:08.897 "r_mbytes_per_sec": 0, 00:07:08.897 "w_mbytes_per_sec": 0 00:07:08.897 }, 00:07:08.897 "claimed": false, 00:07:08.897 "zoned": false, 00:07:08.897 "supported_io_types": { 00:07:08.897 "read": true, 
00:07:08.897 "write": true, 00:07:08.897 "unmap": true, 00:07:08.897 "flush": true, 00:07:08.897 "reset": true, 00:07:08.897 "nvme_admin": true, 00:07:08.897 "nvme_io": true, 00:07:08.897 "nvme_io_md": false, 00:07:08.897 "write_zeroes": true, 00:07:08.897 "zcopy": false, 00:07:08.897 "get_zone_info": false, 00:07:08.897 "zone_management": false, 00:07:08.897 "zone_append": false, 00:07:08.897 "compare": true, 00:07:08.897 "compare_and_write": true, 00:07:08.897 "abort": true, 00:07:08.897 "seek_hole": false, 00:07:08.897 "seek_data": false, 00:07:08.897 "copy": true, 00:07:08.897 "nvme_iov_md": false 00:07:08.897 }, 00:07:08.897 "memory_domains": [ 00:07:08.897 { 00:07:08.897 "dma_device_id": "system", 00:07:08.897 "dma_device_type": 1 00:07:08.897 } 00:07:08.897 ], 00:07:08.897 "driver_specific": { 00:07:08.897 "nvme": [ 00:07:08.897 { 00:07:08.897 "trid": { 00:07:08.897 "trtype": "TCP", 00:07:08.897 "adrfam": "IPv4", 00:07:08.897 "traddr": "10.0.0.2", 00:07:08.897 "trsvcid": "4420", 00:07:08.897 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:08.897 }, 00:07:08.897 "ctrlr_data": { 00:07:08.897 "cntlid": 1, 00:07:08.897 "vendor_id": "0x8086", 00:07:08.897 "model_number": "SPDK bdev Controller", 00:07:08.898 "serial_number": "SPDK0", 00:07:08.898 "firmware_revision": "25.01", 00:07:08.898 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:08.898 "oacs": { 00:07:08.898 "security": 0, 00:07:08.898 "format": 0, 00:07:08.898 "firmware": 0, 00:07:08.898 "ns_manage": 0 00:07:08.898 }, 00:07:08.898 "multi_ctrlr": true, 00:07:08.898 "ana_reporting": false 00:07:08.898 }, 00:07:08.898 "vs": { 00:07:08.898 "nvme_version": "1.3" 00:07:08.898 }, 00:07:08.898 "ns_data": { 00:07:08.898 "id": 1, 00:07:08.898 "can_share": true 00:07:08.898 } 00:07:08.898 } 00:07:08.898 ], 00:07:08.898 "mp_policy": "active_passive" 00:07:08.898 } 00:07:08.898 } 00:07:08.898 ] 00:07:09.155 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=479574 
00:07:09.155 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:09.155 04:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:09.155 Running I/O for 10 seconds... 00:07:10.091 Latency(us) 00:07:10.091 [2024-12-10T03:44:01.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.091 Nvme0n1 : 1.00 23244.00 90.80 0.00 0.00 0.00 0.00 0.00 00:07:10.091 [2024-12-10T03:44:01.228Z] =================================================================================================================== 00:07:10.091 [2024-12-10T03:44:01.228Z] Total : 23244.00 90.80 0.00 0.00 0.00 0.00 0.00 00:07:10.091 00:07:11.026 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2d4561e5-808d-4dd0-b55a-2d9c17d821cf 00:07:11.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.026 Nvme0n1 : 2.00 23500.50 91.80 0.00 0.00 0.00 0.00 0.00 00:07:11.026 [2024-12-10T03:44:02.163Z] =================================================================================================================== 00:07:11.026 [2024-12-10T03:44:02.163Z] Total : 23500.50 91.80 0.00 0.00 0.00 0.00 0.00 00:07:11.026 00:07:11.287 true 00:07:11.287 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d4561e5-808d-4dd0-b55a-2d9c17d821cf 00:07:11.287 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:07:11.597 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:11.597 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:11.597 04:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 479574 00:07:12.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.241 Nvme0n1 : 3.00 23523.00 91.89 0.00 0.00 0.00 0.00 0.00 00:07:12.241 [2024-12-10T03:44:03.378Z] =================================================================================================================== 00:07:12.241 [2024-12-10T03:44:03.378Z] Total : 23523.00 91.89 0.00 0.00 0.00 0.00 0.00 00:07:12.241 00:07:13.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.178 Nvme0n1 : 4.00 23616.75 92.25 0.00 0.00 0.00 0.00 0.00 00:07:13.178 [2024-12-10T03:44:04.315Z] =================================================================================================================== 00:07:13.178 [2024-12-10T03:44:04.315Z] Total : 23616.75 92.25 0.00 0.00 0.00 0.00 0.00 00:07:13.178 00:07:14.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.113 Nvme0n1 : 5.00 23671.20 92.47 0.00 0.00 0.00 0.00 0.00 00:07:14.113 [2024-12-10T03:44:05.250Z] =================================================================================================================== 00:07:14.113 [2024-12-10T03:44:05.250Z] Total : 23671.20 92.47 0.00 0.00 0.00 0.00 0.00 00:07:14.113 00:07:15.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.049 Nvme0n1 : 6.00 23712.33 92.63 0.00 0.00 0.00 0.00 0.00 00:07:15.049 [2024-12-10T03:44:06.186Z] =================================================================================================================== 00:07:15.049 
[2024-12-10T03:44:06.186Z] Total : 23712.33 92.63 0.00 0.00 0.00 0.00 0.00 00:07:15.049 00:07:16.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.426 Nvme0n1 : 7.00 23754.29 92.79 0.00 0.00 0.00 0.00 0.00 00:07:16.426 [2024-12-10T03:44:07.563Z] =================================================================================================================== 00:07:16.426 [2024-12-10T03:44:07.563Z] Total : 23754.29 92.79 0.00 0.00 0.00 0.00 0.00 00:07:16.426 00:07:17.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.363 Nvme0n1 : 8.00 23769.88 92.85 0.00 0.00 0.00 0.00 0.00 00:07:17.363 [2024-12-10T03:44:08.500Z] =================================================================================================================== 00:07:17.363 [2024-12-10T03:44:08.500Z] Total : 23769.88 92.85 0.00 0.00 0.00 0.00 0.00 00:07:17.363 00:07:18.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.299 Nvme0n1 : 9.00 23791.44 92.94 0.00 0.00 0.00 0.00 0.00 00:07:18.299 [2024-12-10T03:44:09.436Z] =================================================================================================================== 00:07:18.299 [2024-12-10T03:44:09.436Z] Total : 23791.44 92.94 0.00 0.00 0.00 0.00 0.00 00:07:18.299 00:07:19.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.236 Nvme0n1 : 10.00 23801.30 92.97 0.00 0.00 0.00 0.00 0.00 00:07:19.236 [2024-12-10T03:44:10.373Z] =================================================================================================================== 00:07:19.236 [2024-12-10T03:44:10.373Z] Total : 23801.30 92.97 0.00 0.00 0.00 0.00 0.00 00:07:19.236 00:07:19.236 00:07:19.236 Latency(us) 00:07:19.236 [2024-12-10T03:44:10.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:19.236 Nvme0n1 : 10.00 23799.11 92.97 0.00 0.00 5374.98 2153.33 10673.01 00:07:19.236 [2024-12-10T03:44:10.373Z] =================================================================================================================== 00:07:19.236 [2024-12-10T03:44:10.373Z] Total : 23799.11 92.97 0.00 0.00 5374.98 2153.33 10673.01 00:07:19.236 { 00:07:19.236 "results": [ 00:07:19.236 { 00:07:19.236 "job": "Nvme0n1", 00:07:19.236 "core_mask": "0x2", 00:07:19.236 "workload": "randwrite", 00:07:19.236 "status": "finished", 00:07:19.236 "queue_depth": 128, 00:07:19.236 "io_size": 4096, 00:07:19.236 "runtime": 10.003651, 00:07:19.236 "iops": 23799.11094459413, 00:07:19.236 "mibps": 92.96527712732082, 00:07:19.236 "io_failed": 0, 00:07:19.236 "io_timeout": 0, 00:07:19.236 "avg_latency_us": 5374.975077923642, 00:07:19.236 "min_latency_us": 2153.325714285714, 00:07:19.236 "max_latency_us": 10673.005714285715 00:07:19.236 } 00:07:19.236 ], 00:07:19.236 "core_count": 1 00:07:19.236 } 00:07:19.236 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 479449 00:07:19.236 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 479449 ']' 00:07:19.236 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 479449 00:07:19.236 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:19.236 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.236 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 479449 00:07:19.236 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:19.236 04:44:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:19.236 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 479449' 00:07:19.236 killing process with pid 479449 00:07:19.236 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 479449 00:07:19.236 Received shutdown signal, test time was about 10.000000 seconds 00:07:19.236 00:07:19.236 Latency(us) 00:07:19.236 [2024-12-10T03:44:10.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.236 [2024-12-10T03:44:10.373Z] =================================================================================================================== 00:07:19.236 [2024-12-10T03:44:10.373Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:19.236 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 479449 00:07:19.495 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:19.495 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:19.754 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d4561e5-808d-4dd0-b55a-2d9c17d821cf 00:07:19.754 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:20.012 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:07:20.012 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:20.012 04:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:20.271 [2024-12-10 04:44:11.152864] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:20.271 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d4561e5-808d-4dd0-b55a-2d9c17d821cf 00:07:20.271 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:20.271 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d4561e5-808d-4dd0-b55a-2d9c17d821cf 00:07:20.271 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.271 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.271 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.271 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.271 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.271 04:44:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.271 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.271 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:20.271 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d4561e5-808d-4dd0-b55a-2d9c17d821cf 00:07:20.271 request: 00:07:20.271 { 00:07:20.271 "uuid": "2d4561e5-808d-4dd0-b55a-2d9c17d821cf", 00:07:20.271 "method": "bdev_lvol_get_lvstores", 00:07:20.271 "req_id": 1 00:07:20.271 } 00:07:20.271 Got JSON-RPC error response 00:07:20.271 response: 00:07:20.271 { 00:07:20.271 "code": -19, 00:07:20.271 "message": "No such device" 00:07:20.271 } 00:07:20.271 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:20.272 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:20.272 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:20.272 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:20.272 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:20.530 aio_bdev 00:07:20.530 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8dcab520-4532-4ded-a8e7-cd2963f78635 00:07:20.530 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=8dcab520-4532-4ded-a8e7-cd2963f78635 00:07:20.530 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:20.530 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:20.530 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:20.530 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:20.530 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:20.790 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8dcab520-4532-4ded-a8e7-cd2963f78635 -t 2000 00:07:20.790 [ 00:07:20.790 { 00:07:20.790 "name": "8dcab520-4532-4ded-a8e7-cd2963f78635", 00:07:20.790 "aliases": [ 00:07:20.790 "lvs/lvol" 00:07:20.790 ], 00:07:20.790 "product_name": "Logical Volume", 00:07:20.790 "block_size": 4096, 00:07:20.790 "num_blocks": 38912, 00:07:20.790 "uuid": "8dcab520-4532-4ded-a8e7-cd2963f78635", 00:07:20.790 "assigned_rate_limits": { 00:07:20.790 "rw_ios_per_sec": 0, 00:07:20.790 "rw_mbytes_per_sec": 0, 00:07:20.790 "r_mbytes_per_sec": 0, 00:07:20.790 "w_mbytes_per_sec": 0 00:07:20.790 }, 00:07:20.790 "claimed": false, 00:07:20.790 "zoned": false, 00:07:20.790 "supported_io_types": { 00:07:20.790 "read": true, 00:07:20.790 "write": true, 00:07:20.790 "unmap": true, 00:07:20.790 "flush": false, 00:07:20.790 "reset": true, 00:07:20.790 
"nvme_admin": false, 00:07:20.790 "nvme_io": false, 00:07:20.790 "nvme_io_md": false, 00:07:20.790 "write_zeroes": true, 00:07:20.790 "zcopy": false, 00:07:20.790 "get_zone_info": false, 00:07:20.790 "zone_management": false, 00:07:20.790 "zone_append": false, 00:07:20.790 "compare": false, 00:07:20.790 "compare_and_write": false, 00:07:20.790 "abort": false, 00:07:20.790 "seek_hole": true, 00:07:20.790 "seek_data": true, 00:07:20.790 "copy": false, 00:07:20.790 "nvme_iov_md": false 00:07:20.790 }, 00:07:20.790 "driver_specific": { 00:07:20.790 "lvol": { 00:07:20.790 "lvol_store_uuid": "2d4561e5-808d-4dd0-b55a-2d9c17d821cf", 00:07:20.790 "base_bdev": "aio_bdev", 00:07:20.790 "thin_provision": false, 00:07:20.790 "num_allocated_clusters": 38, 00:07:20.790 "snapshot": false, 00:07:20.790 "clone": false, 00:07:20.790 "esnap_clone": false 00:07:20.790 } 00:07:20.790 } 00:07:20.790 } 00:07:20.790 ] 00:07:20.790 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:20.790 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d4561e5-808d-4dd0-b55a-2d9c17d821cf 00:07:20.790 04:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:21.049 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:21.049 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d4561e5-808d-4dd0-b55a-2d9c17d821cf 00:07:21.049 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:21.308 04:44:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:21.308 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8dcab520-4532-4ded-a8e7-cd2963f78635 00:07:21.567 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2d4561e5-808d-4dd0-b55a-2d9c17d821cf 00:07:21.567 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:21.827 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:21.827 00:07:21.827 real 0m15.478s 00:07:21.827 user 0m15.139s 00:07:21.827 sys 0m1.410s 00:07:21.827 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.827 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:21.827 ************************************ 00:07:21.827 END TEST lvs_grow_clean 00:07:21.827 ************************************ 00:07:21.827 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:21.827 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:21.827 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.827 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.827 ************************************ 
00:07:21.827 START TEST lvs_grow_dirty 00:07:21.827 ************************************ 00:07:21.827 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:21.827 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:22.086 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:22.086 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:22.086 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:22.086 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:22.086 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:22.086 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:22.086 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:22.086 04:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:22.086 04:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:22.086 04:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:22.345 04:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a1adce70-5cc4-41c2-8b72-32b5afb0b507 00:07:22.345 04:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1adce70-5cc4-41c2-8b72-32b5afb0b507 00:07:22.345 04:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:22.604 04:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:22.604 04:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:22.604 04:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a1adce70-5cc4-41c2-8b72-32b5afb0b507 lvol 150 00:07:22.863 04:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0ea56d19-07f7-4d0e-aae7-1d4b6be59291 00:07:22.863 04:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:22.863 04:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:22.863 [2024-12-10 04:44:13.913138] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:22.863 [2024-12-10 04:44:13.913192] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:22.863 true 00:07:22.863 04:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1adce70-5cc4-41c2-8b72-32b5afb0b507 00:07:22.863 04:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:23.122 04:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:23.122 04:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:23.381 04:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0ea56d19-07f7-4d0e-aae7-1d4b6be59291 00:07:23.381 04:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:23.640 [2024-12-10 04:44:14.639306] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.640 04:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:23.900 04:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:23.900 04:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=482109 00:07:23.900 04:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:23.900 04:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 482109 /var/tmp/bdevperf.sock 00:07:23.900 04:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 482109 ']' 00:07:23.900 04:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:23.900 04:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.900 04:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:23.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:23.900 04:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.900 04:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:23.900 [2024-12-10 04:44:14.848196] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:07:23.900 [2024-12-10 04:44:14.848238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482109 ] 00:07:23.900 [2024-12-10 04:44:14.920471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.900 [2024-12-10 04:44:14.959543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.167 04:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.167 04:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:24.167 04:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:24.425 Nvme0n1 00:07:24.425 04:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:24.425 [ 00:07:24.425 { 00:07:24.425 "name": "Nvme0n1", 00:07:24.425 "aliases": [ 00:07:24.425 "0ea56d19-07f7-4d0e-aae7-1d4b6be59291" 00:07:24.425 ], 00:07:24.425 "product_name": "NVMe disk", 00:07:24.425 "block_size": 4096, 00:07:24.425 "num_blocks": 38912, 00:07:24.425 "uuid": "0ea56d19-07f7-4d0e-aae7-1d4b6be59291", 00:07:24.425 "numa_id": 1, 00:07:24.425 "assigned_rate_limits": { 00:07:24.425 "rw_ios_per_sec": 0, 00:07:24.425 "rw_mbytes_per_sec": 0, 00:07:24.425 "r_mbytes_per_sec": 0, 00:07:24.425 "w_mbytes_per_sec": 0 00:07:24.425 }, 00:07:24.425 "claimed": false, 00:07:24.425 "zoned": false, 00:07:24.425 "supported_io_types": { 00:07:24.425 "read": true, 
00:07:24.425 "write": true, 00:07:24.425 "unmap": true, 00:07:24.425 "flush": true, 00:07:24.425 "reset": true, 00:07:24.425 "nvme_admin": true, 00:07:24.425 "nvme_io": true, 00:07:24.425 "nvme_io_md": false, 00:07:24.425 "write_zeroes": true, 00:07:24.425 "zcopy": false, 00:07:24.425 "get_zone_info": false, 00:07:24.425 "zone_management": false, 00:07:24.425 "zone_append": false, 00:07:24.425 "compare": true, 00:07:24.425 "compare_and_write": true, 00:07:24.425 "abort": true, 00:07:24.425 "seek_hole": false, 00:07:24.425 "seek_data": false, 00:07:24.425 "copy": true, 00:07:24.425 "nvme_iov_md": false 00:07:24.425 }, 00:07:24.425 "memory_domains": [ 00:07:24.425 { 00:07:24.425 "dma_device_id": "system", 00:07:24.425 "dma_device_type": 1 00:07:24.425 } 00:07:24.425 ], 00:07:24.425 "driver_specific": { 00:07:24.425 "nvme": [ 00:07:24.425 { 00:07:24.425 "trid": { 00:07:24.425 "trtype": "TCP", 00:07:24.425 "adrfam": "IPv4", 00:07:24.425 "traddr": "10.0.0.2", 00:07:24.425 "trsvcid": "4420", 00:07:24.425 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:24.425 }, 00:07:24.425 "ctrlr_data": { 00:07:24.425 "cntlid": 1, 00:07:24.426 "vendor_id": "0x8086", 00:07:24.426 "model_number": "SPDK bdev Controller", 00:07:24.426 "serial_number": "SPDK0", 00:07:24.426 "firmware_revision": "25.01", 00:07:24.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:24.426 "oacs": { 00:07:24.426 "security": 0, 00:07:24.426 "format": 0, 00:07:24.426 "firmware": 0, 00:07:24.426 "ns_manage": 0 00:07:24.426 }, 00:07:24.426 "multi_ctrlr": true, 00:07:24.426 "ana_reporting": false 00:07:24.426 }, 00:07:24.426 "vs": { 00:07:24.426 "nvme_version": "1.3" 00:07:24.426 }, 00:07:24.426 "ns_data": { 00:07:24.426 "id": 1, 00:07:24.426 "can_share": true 00:07:24.426 } 00:07:24.426 } 00:07:24.426 ], 00:07:24.426 "mp_policy": "active_passive" 00:07:24.426 } 00:07:24.426 } 00:07:24.426 ] 00:07:24.426 04:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=482165 
00:07:24.426 04:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:24.426 04:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:24.684 Running I/O for 10 seconds... 00:07:25.621 Latency(us) 00:07:25.621 [2024-12-10T03:44:16.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.621 Nvme0n1 : 1.00 23387.00 91.36 0.00 0.00 0.00 0.00 0.00 00:07:25.621 [2024-12-10T03:44:16.758Z] =================================================================================================================== 00:07:25.621 [2024-12-10T03:44:16.758Z] Total : 23387.00 91.36 0.00 0.00 0.00 0.00 0.00 00:07:25.621 00:07:26.557 04:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a1adce70-5cc4-41c2-8b72-32b5afb0b507 00:07:26.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.557 Nvme0n1 : 2.00 23557.00 92.02 0.00 0.00 0.00 0.00 0.00 00:07:26.557 [2024-12-10T03:44:17.694Z] =================================================================================================================== 00:07:26.557 [2024-12-10T03:44:17.694Z] Total : 23557.00 92.02 0.00 0.00 0.00 0.00 0.00 00:07:26.557 00:07:26.816 true 00:07:26.816 04:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1adce70-5cc4-41c2-8b72-32b5afb0b507 00:07:26.816 04:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:07:27.074 04:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:27.075 04:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:27.075 04:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 482165 00:07:27.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.642 Nvme0n1 : 3.00 23601.33 92.19 0.00 0.00 0.00 0.00 0.00 00:07:27.642 [2024-12-10T03:44:18.779Z] =================================================================================================================== 00:07:27.642 [2024-12-10T03:44:18.779Z] Total : 23601.33 92.19 0.00 0.00 0.00 0.00 0.00 00:07:27.642 00:07:28.579 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.579 Nvme0n1 : 4.00 23672.50 92.47 0.00 0.00 0.00 0.00 0.00 00:07:28.579 [2024-12-10T03:44:19.716Z] =================================================================================================================== 00:07:28.579 [2024-12-10T03:44:19.716Z] Total : 23672.50 92.47 0.00 0.00 0.00 0.00 0.00 00:07:28.579 00:07:29.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.956 Nvme0n1 : 5.00 23637.60 92.33 0.00 0.00 0.00 0.00 0.00 00:07:29.956 [2024-12-10T03:44:21.093Z] =================================================================================================================== 00:07:29.956 [2024-12-10T03:44:21.093Z] Total : 23637.60 92.33 0.00 0.00 0.00 0.00 0.00 00:07:29.956 00:07:30.893 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.893 Nvme0n1 : 6.00 23651.17 92.39 0.00 0.00 0.00 0.00 0.00 00:07:30.893 [2024-12-10T03:44:22.030Z] =================================================================================================================== 00:07:30.893 
[2024-12-10T03:44:22.030Z] Total : 23651.17 92.39 0.00 0.00 0.00 0.00 0.00 00:07:30.893 00:07:31.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.830 Nvme0n1 : 7.00 23686.71 92.53 0.00 0.00 0.00 0.00 0.00 00:07:31.830 [2024-12-10T03:44:22.967Z] =================================================================================================================== 00:07:31.830 [2024-12-10T03:44:22.967Z] Total : 23686.71 92.53 0.00 0.00 0.00 0.00 0.00 00:07:31.830 00:07:32.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.767 Nvme0n1 : 8.00 23724.38 92.67 0.00 0.00 0.00 0.00 0.00 00:07:32.767 [2024-12-10T03:44:23.904Z] =================================================================================================================== 00:07:32.767 [2024-12-10T03:44:23.904Z] Total : 23724.38 92.67 0.00 0.00 0.00 0.00 0.00 00:07:32.767 00:07:33.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.704 Nvme0n1 : 9.00 23747.11 92.76 0.00 0.00 0.00 0.00 0.00 00:07:33.704 [2024-12-10T03:44:24.841Z] =================================================================================================================== 00:07:33.704 [2024-12-10T03:44:24.841Z] Total : 23747.11 92.76 0.00 0.00 0.00 0.00 0.00 00:07:33.704 00:07:34.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.640 Nvme0n1 : 10.00 23773.00 92.86 0.00 0.00 0.00 0.00 0.00 00:07:34.640 [2024-12-10T03:44:25.777Z] =================================================================================================================== 00:07:34.640 [2024-12-10T03:44:25.777Z] Total : 23773.00 92.86 0.00 0.00 0.00 0.00 0.00 00:07:34.640 00:07:34.640 00:07:34.640 Latency(us) 00:07:34.640 [2024-12-10T03:44:25.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:34.640 Nvme0n1 : 10.00 23774.26 92.87 0.00 0.00 5381.18 3105.16 11359.57 00:07:34.640 [2024-12-10T03:44:25.777Z] =================================================================================================================== 00:07:34.640 [2024-12-10T03:44:25.777Z] Total : 23774.26 92.87 0.00 0.00 5381.18 3105.16 11359.57 00:07:34.640 { 00:07:34.640 "results": [ 00:07:34.640 { 00:07:34.640 "job": "Nvme0n1", 00:07:34.640 "core_mask": "0x2", 00:07:34.640 "workload": "randwrite", 00:07:34.640 "status": "finished", 00:07:34.640 "queue_depth": 128, 00:07:34.640 "io_size": 4096, 00:07:34.640 "runtime": 10.004856, 00:07:34.640 "iops": 23774.25522166436, 00:07:34.640 "mibps": 92.8681844596264, 00:07:34.640 "io_failed": 0, 00:07:34.640 "io_timeout": 0, 00:07:34.640 "avg_latency_us": 5381.1818149364035, 00:07:34.640 "min_latency_us": 3105.158095238095, 00:07:34.640 "max_latency_us": 11359.573333333334 00:07:34.640 } 00:07:34.640 ], 00:07:34.640 "core_count": 1 00:07:34.640 } 00:07:34.640 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 482109 00:07:34.640 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 482109 ']' 00:07:34.640 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 482109 00:07:34.640 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:34.640 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.640 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482109 00:07:34.640 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:34.640 04:44:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:34.641 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482109' 00:07:34.641 killing process with pid 482109 00:07:34.641 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 482109 00:07:34.641 Received shutdown signal, test time was about 10.000000 seconds 00:07:34.641 00:07:34.641 Latency(us) 00:07:34.641 [2024-12-10T03:44:25.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.641 [2024-12-10T03:44:25.778Z] =================================================================================================================== 00:07:34.641 [2024-12-10T03:44:25.778Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:34.641 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 482109 00:07:34.900 04:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:35.159 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:35.418 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1adce70-5cc4-41c2-8b72-32b5afb0b507 00:07:35.418 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:35.418 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:07:35.418 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:35.418 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 479071 00:07:35.418 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 479071 00:07:35.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 479071 Killed "${NVMF_APP[@]}" "$@" 00:07:35.678 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:35.678 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:35.678 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:35.678 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.678 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.678 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=484053 00:07:35.678 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 484053 00:07:35.678 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:35.678 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 484053 ']' 00:07:35.678 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.678 04:44:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.678 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.678 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.678 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.678 [2024-12-10 04:44:26.624553] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:07:35.678 [2024-12-10 04:44:26.624601] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.678 [2024-12-10 04:44:26.701849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.678 [2024-12-10 04:44:26.741209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.678 [2024-12-10 04:44:26.741245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.678 [2024-12-10 04:44:26.741252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.678 [2024-12-10 04:44:26.741258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.678 [2024-12-10 04:44:26.741263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:35.678 [2024-12-10 04:44:26.741750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.937 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.937 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:35.937 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:35.937 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:35.937 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.937 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.937 04:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:35.937 [2024-12-10 04:44:27.040138] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:35.937 [2024-12-10 04:44:27.040245] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:35.937 [2024-12-10 04:44:27.040273] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:36.196 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:36.196 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0ea56d19-07f7-4d0e-aae7-1d4b6be59291 00:07:36.196 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0ea56d19-07f7-4d0e-aae7-1d4b6be59291 
00:07:36.196 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:36.196 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:36.196 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:36.196 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:36.196 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:36.196 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0ea56d19-07f7-4d0e-aae7-1d4b6be59291 -t 2000 00:07:36.456 [ 00:07:36.456 { 00:07:36.456 "name": "0ea56d19-07f7-4d0e-aae7-1d4b6be59291", 00:07:36.456 "aliases": [ 00:07:36.456 "lvs/lvol" 00:07:36.456 ], 00:07:36.456 "product_name": "Logical Volume", 00:07:36.456 "block_size": 4096, 00:07:36.456 "num_blocks": 38912, 00:07:36.456 "uuid": "0ea56d19-07f7-4d0e-aae7-1d4b6be59291", 00:07:36.456 "assigned_rate_limits": { 00:07:36.456 "rw_ios_per_sec": 0, 00:07:36.456 "rw_mbytes_per_sec": 0, 00:07:36.456 "r_mbytes_per_sec": 0, 00:07:36.456 "w_mbytes_per_sec": 0 00:07:36.456 }, 00:07:36.456 "claimed": false, 00:07:36.456 "zoned": false, 00:07:36.456 "supported_io_types": { 00:07:36.456 "read": true, 00:07:36.456 "write": true, 00:07:36.456 "unmap": true, 00:07:36.456 "flush": false, 00:07:36.456 "reset": true, 00:07:36.456 "nvme_admin": false, 00:07:36.456 "nvme_io": false, 00:07:36.456 "nvme_io_md": false, 00:07:36.456 "write_zeroes": true, 00:07:36.456 "zcopy": false, 00:07:36.456 "get_zone_info": false, 00:07:36.456 "zone_management": false, 00:07:36.456 "zone_append": 
false, 00:07:36.456 "compare": false, 00:07:36.456 "compare_and_write": false, 00:07:36.456 "abort": false, 00:07:36.456 "seek_hole": true, 00:07:36.456 "seek_data": true, 00:07:36.456 "copy": false, 00:07:36.456 "nvme_iov_md": false 00:07:36.456 }, 00:07:36.456 "driver_specific": { 00:07:36.456 "lvol": { 00:07:36.456 "lvol_store_uuid": "a1adce70-5cc4-41c2-8b72-32b5afb0b507", 00:07:36.456 "base_bdev": "aio_bdev", 00:07:36.456 "thin_provision": false, 00:07:36.456 "num_allocated_clusters": 38, 00:07:36.456 "snapshot": false, 00:07:36.456 "clone": false, 00:07:36.456 "esnap_clone": false 00:07:36.456 } 00:07:36.456 } 00:07:36.456 } 00:07:36.456 ] 00:07:36.456 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:36.456 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1adce70-5cc4-41c2-8b72-32b5afb0b507 00:07:36.456 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:36.716 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:36.716 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1adce70-5cc4-41c2-8b72-32b5afb0b507 00:07:36.716 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:36.716 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:36.716 04:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:36.976 [2024-12-10 04:44:28.017280] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:36.976 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1adce70-5cc4-41c2-8b72-32b5afb0b507 00:07:36.976 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:36.976 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1adce70-5cc4-41c2-8b72-32b5afb0b507 00:07:36.976 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:36.976 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.976 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:36.976 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.976 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:36.976 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.976 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:36.976 04:44:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:36.976 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1adce70-5cc4-41c2-8b72-32b5afb0b507 00:07:37.235 request: 00:07:37.235 { 00:07:37.235 "uuid": "a1adce70-5cc4-41c2-8b72-32b5afb0b507", 00:07:37.235 "method": "bdev_lvol_get_lvstores", 00:07:37.235 "req_id": 1 00:07:37.235 } 00:07:37.235 Got JSON-RPC error response 00:07:37.235 response: 00:07:37.235 { 00:07:37.235 "code": -19, 00:07:37.235 "message": "No such device" 00:07:37.235 } 00:07:37.235 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:37.235 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.235 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:37.235 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.235 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:37.495 aio_bdev 00:07:37.495 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0ea56d19-07f7-4d0e-aae7-1d4b6be59291 00:07:37.495 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0ea56d19-07f7-4d0e-aae7-1d4b6be59291 00:07:37.495 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.495 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:37.495 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.495 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.495 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:37.495 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0ea56d19-07f7-4d0e-aae7-1d4b6be59291 -t 2000 00:07:37.754 [ 00:07:37.754 { 00:07:37.754 "name": "0ea56d19-07f7-4d0e-aae7-1d4b6be59291", 00:07:37.754 "aliases": [ 00:07:37.754 "lvs/lvol" 00:07:37.754 ], 00:07:37.754 "product_name": "Logical Volume", 00:07:37.754 "block_size": 4096, 00:07:37.754 "num_blocks": 38912, 00:07:37.754 "uuid": "0ea56d19-07f7-4d0e-aae7-1d4b6be59291", 00:07:37.754 "assigned_rate_limits": { 00:07:37.754 "rw_ios_per_sec": 0, 00:07:37.754 "rw_mbytes_per_sec": 0, 00:07:37.754 "r_mbytes_per_sec": 0, 00:07:37.754 "w_mbytes_per_sec": 0 00:07:37.754 }, 00:07:37.754 "claimed": false, 00:07:37.754 "zoned": false, 00:07:37.754 "supported_io_types": { 00:07:37.754 "read": true, 00:07:37.754 "write": true, 00:07:37.754 "unmap": true, 00:07:37.754 "flush": false, 00:07:37.754 "reset": true, 00:07:37.754 "nvme_admin": false, 00:07:37.754 "nvme_io": false, 00:07:37.754 "nvme_io_md": false, 00:07:37.754 "write_zeroes": true, 00:07:37.754 "zcopy": false, 00:07:37.754 "get_zone_info": false, 00:07:37.754 "zone_management": false, 00:07:37.754 "zone_append": false, 00:07:37.754 "compare": false, 00:07:37.754 "compare_and_write": false, 
00:07:37.754 "abort": false, 00:07:37.754 "seek_hole": true, 00:07:37.754 "seek_data": true, 00:07:37.754 "copy": false, 00:07:37.754 "nvme_iov_md": false 00:07:37.754 }, 00:07:37.754 "driver_specific": { 00:07:37.754 "lvol": { 00:07:37.754 "lvol_store_uuid": "a1adce70-5cc4-41c2-8b72-32b5afb0b507", 00:07:37.754 "base_bdev": "aio_bdev", 00:07:37.754 "thin_provision": false, 00:07:37.754 "num_allocated_clusters": 38, 00:07:37.754 "snapshot": false, 00:07:37.754 "clone": false, 00:07:37.754 "esnap_clone": false 00:07:37.754 } 00:07:37.754 } 00:07:37.754 } 00:07:37.754 ] 00:07:37.754 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:37.754 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1adce70-5cc4-41c2-8b72-32b5afb0b507 00:07:37.754 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:38.013 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:38.013 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1adce70-5cc4-41c2-8b72-32b5afb0b507 00:07:38.013 04:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:38.272 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:38.272 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0ea56d19-07f7-4d0e-aae7-1d4b6be59291 00:07:38.272 04:44:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a1adce70-5cc4-41c2-8b72-32b5afb0b507 00:07:38.531 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:38.790 00:07:38.790 real 0m16.807s 00:07:38.790 user 0m43.329s 00:07:38.790 sys 0m3.849s 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:38.790 ************************************ 00:07:38.790 END TEST lvs_grow_dirty 00:07:38.790 ************************************ 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:38.790 nvmf_trace.0 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:38.790 rmmod nvme_tcp 00:07:38.790 rmmod nvme_fabrics 00:07:38.790 rmmod nvme_keyring 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 484053 ']' 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 484053 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 484053 ']' 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 484053 
00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.790 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 484053 00:07:39.049 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.049 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.049 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 484053' 00:07:39.049 killing process with pid 484053 00:07:39.049 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 484053 00:07:39.049 04:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 484053 00:07:39.049 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:39.049 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:39.049 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:39.049 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:39.049 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:39.049 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:39.049 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:39.049 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:39.049 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:07:39.049 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.049 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.049 04:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:41.588 00:07:41.588 real 0m41.620s 00:07:41.588 user 1m4.085s 00:07:41.588 sys 0m10.163s 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:41.588 ************************************ 00:07:41.588 END TEST nvmf_lvs_grow 00:07:41.588 ************************************ 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:41.588 ************************************ 00:07:41.588 START TEST nvmf_bdev_io_wait 00:07:41.588 ************************************ 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:41.588 * Looking for test storage... 
00:07:41.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:41.588 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.588 --rc genhtml_branch_coverage=1 00:07:41.588 --rc genhtml_function_coverage=1 00:07:41.588 --rc genhtml_legend=1 00:07:41.588 --rc geninfo_all_blocks=1 00:07:41.588 --rc geninfo_unexecuted_blocks=1 00:07:41.588 00:07:41.588 ' 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:41.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.588 --rc genhtml_branch_coverage=1 00:07:41.588 --rc genhtml_function_coverage=1 00:07:41.588 --rc genhtml_legend=1 00:07:41.588 --rc geninfo_all_blocks=1 00:07:41.588 --rc geninfo_unexecuted_blocks=1 00:07:41.588 00:07:41.588 ' 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:41.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.588 --rc genhtml_branch_coverage=1 00:07:41.588 --rc genhtml_function_coverage=1 00:07:41.588 --rc genhtml_legend=1 00:07:41.588 --rc geninfo_all_blocks=1 00:07:41.588 --rc geninfo_unexecuted_blocks=1 00:07:41.588 00:07:41.588 ' 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:41.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.588 --rc genhtml_branch_coverage=1 00:07:41.588 --rc genhtml_function_coverage=1 00:07:41.588 --rc genhtml_legend=1 00:07:41.588 --rc geninfo_all_blocks=1 00:07:41.588 --rc geninfo_unexecuted_blocks=1 00:07:41.588 00:07:41.588 ' 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.588 04:44:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.588 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:41.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:41.589 04:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:48.163 04:44:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:48.163 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:48.163 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.163 04:44:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:48.163 Found net devices under 0000:af:00.0: cvl_0_0 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.163 
04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:48.163 Found net devices under 0000:af:00.1: cvl_0_1 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.163 04:44:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:48.163 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:48.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:07:48.164 00:07:48.164 --- 10.0.0.2 ping statistics --- 00:07:48.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.164 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:48.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:07:48.164 00:07:48.164 --- 10.0.0.1 ping statistics --- 00:07:48.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.164 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=488122 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 488122 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 488122 ']' 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.164 04:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.164 [2024-12-10 04:44:38.506265] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:07:48.164 [2024-12-10 04:44:38.506313] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.164 [2024-12-10 04:44:38.584297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.164 [2024-12-10 04:44:38.624739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.164 [2024-12-10 04:44:38.624778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:48.164 [2024-12-10 04:44:38.624785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.164 [2024-12-10 04:44:38.624792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.164 [2024-12-10 04:44:38.624797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.164 [2024-12-10 04:44:38.626268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.164 [2024-12-10 04:44:38.626379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.164 [2024-12-10 04:44:38.626464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.164 [2024-12-10 04:44:38.626465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.425 04:44:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.425 [2024-12-10 04:44:39.443976] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.425 Malloc0 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.425 
04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.425 [2024-12-10 04:44:39.499214] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=488367 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=488369 
00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:48.425 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:48.425 { 00:07:48.425 "params": { 00:07:48.425 "name": "Nvme$subsystem", 00:07:48.425 "trtype": "$TEST_TRANSPORT", 00:07:48.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.425 "adrfam": "ipv4", 00:07:48.426 "trsvcid": "$NVMF_PORT", 00:07:48.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.426 "hdgst": ${hdgst:-false}, 00:07:48.426 "ddgst": ${ddgst:-false} 00:07:48.426 }, 00:07:48.426 "method": "bdev_nvme_attach_controller" 00:07:48.426 } 00:07:48.426 EOF 00:07:48.426 )") 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=488371 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:48.426 { 00:07:48.426 "params": { 00:07:48.426 
"name": "Nvme$subsystem", 00:07:48.426 "trtype": "$TEST_TRANSPORT", 00:07:48.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.426 "adrfam": "ipv4", 00:07:48.426 "trsvcid": "$NVMF_PORT", 00:07:48.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.426 "hdgst": ${hdgst:-false}, 00:07:48.426 "ddgst": ${ddgst:-false} 00:07:48.426 }, 00:07:48.426 "method": "bdev_nvme_attach_controller" 00:07:48.426 } 00:07:48.426 EOF 00:07:48.426 )") 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=488374 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:07:48.426 { 00:07:48.426 "params": { 00:07:48.426 "name": "Nvme$subsystem", 00:07:48.426 "trtype": "$TEST_TRANSPORT", 00:07:48.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.426 "adrfam": "ipv4", 00:07:48.426 "trsvcid": "$NVMF_PORT", 00:07:48.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.426 "hdgst": ${hdgst:-false}, 00:07:48.426 "ddgst": ${ddgst:-false} 00:07:48.426 }, 00:07:48.426 "method": "bdev_nvme_attach_controller" 00:07:48.426 } 00:07:48.426 EOF 00:07:48.426 )") 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:48.426 { 00:07:48.426 "params": { 00:07:48.426 "name": "Nvme$subsystem", 00:07:48.426 "trtype": "$TEST_TRANSPORT", 00:07:48.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.426 "adrfam": "ipv4", 00:07:48.426 "trsvcid": "$NVMF_PORT", 00:07:48.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.426 "hdgst": ${hdgst:-false}, 00:07:48.426 "ddgst": ${ddgst:-false} 00:07:48.426 }, 00:07:48.426 "method": "bdev_nvme_attach_controller" 00:07:48.426 } 00:07:48.426 EOF 00:07:48.426 )") 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 488367 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:48.426 
04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:48.426 "params": { 00:07:48.426 "name": "Nvme1", 00:07:48.426 "trtype": "tcp", 00:07:48.426 "traddr": "10.0.0.2", 00:07:48.426 "adrfam": "ipv4", 00:07:48.426 "trsvcid": "4420", 00:07:48.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:48.426 "hdgst": false, 00:07:48.426 "ddgst": false 00:07:48.426 }, 00:07:48.426 "method": "bdev_nvme_attach_controller" 00:07:48.426 }' 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:48.426 "params": { 00:07:48.426 "name": "Nvme1", 00:07:48.426 "trtype": "tcp", 00:07:48.426 "traddr": "10.0.0.2", 00:07:48.426 "adrfam": "ipv4", 00:07:48.426 "trsvcid": "4420", 00:07:48.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:48.426 "hdgst": false, 00:07:48.426 "ddgst": false 00:07:48.426 }, 00:07:48.426 "method": "bdev_nvme_attach_controller" 00:07:48.426 }' 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:48.426 "params": { 00:07:48.426 "name": "Nvme1", 00:07:48.426 "trtype": "tcp", 00:07:48.426 "traddr": "10.0.0.2", 00:07:48.426 "adrfam": "ipv4", 00:07:48.426 "trsvcid": "4420", 00:07:48.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:48.426 "hdgst": false, 00:07:48.426 "ddgst": false 00:07:48.426 }, 00:07:48.426 "method": "bdev_nvme_attach_controller" 00:07:48.426 }' 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:48.426 04:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:48.426 "params": { 00:07:48.426 "name": "Nvme1", 00:07:48.426 "trtype": "tcp", 00:07:48.426 "traddr": "10.0.0.2", 00:07:48.426 "adrfam": "ipv4", 00:07:48.426 "trsvcid": "4420", 00:07:48.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:48.426 "hdgst": false, 00:07:48.426 "ddgst": false 00:07:48.426 }, 00:07:48.426 "method": "bdev_nvme_attach_controller" 00:07:48.426 }' 00:07:48.426 [2024-12-10 04:44:39.550729] Starting SPDK v25.01-pre git sha1 
86d35c37a / DPDK 24.03.0 initialization... 00:07:48.426 [2024-12-10 04:44:39.550730] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:07:48.426 [2024-12-10 04:44:39.550781] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:48.426 [2024-12-10 04:44:39.550782] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:48.426 [2024-12-10 04:44:39.551534] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:07:48.426 [2024-12-10 04:44:39.551571] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:48.426 [2024-12-10 04:44:39.552871] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:07:48.426 [2024-12-10 04:44:39.552917] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:48.686 [2024-12-10 04:44:39.745104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.686 [2024-12-10 04:44:39.790135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:48.945 [2024-12-10 04:44:39.837689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.945 [2024-12-10 04:44:39.883183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:48.945 [2024-12-10 04:44:39.897821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.945 [2024-12-10 04:44:39.936200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:48.945 [2024-12-10 04:44:39.994833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.945 [2024-12-10 04:44:40.058305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:49.204 Running I/O for 1 seconds... 00:07:49.204 Running I/O for 1 seconds... 00:07:49.204 Running I/O for 1 seconds... 00:07:49.204 Running I/O for 1 seconds... 
00:07:50.141 242568.00 IOPS, 947.53 MiB/s 00:07:50.141 Latency(us) 00:07:50.141 [2024-12-10T03:44:41.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.141 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:50.141 Nvme1n1 : 1.00 242203.73 946.11 0.00 0.00 525.85 220.40 1490.16 00:07:50.141 [2024-12-10T03:44:41.278Z] =================================================================================================================== 00:07:50.141 [2024-12-10T03:44:41.278Z] Total : 242203.73 946.11 0.00 0.00 525.85 220.40 1490.16 00:07:50.141 11544.00 IOPS, 45.09 MiB/s 00:07:50.141 Latency(us) 00:07:50.141 [2024-12-10T03:44:41.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.141 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:50.141 Nvme1n1 : 1.01 11589.01 45.27 0.00 0.00 11003.18 6303.94 14542.75 00:07:50.141 [2024-12-10T03:44:41.278Z] =================================================================================================================== 00:07:50.141 [2024-12-10T03:44:41.278Z] Total : 11589.01 45.27 0.00 0.00 11003.18 6303.94 14542.75 00:07:50.141 10983.00 IOPS, 42.90 MiB/s 00:07:50.141 Latency(us) 00:07:50.141 [2024-12-10T03:44:41.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.141 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:50.141 Nvme1n1 : 1.01 11051.79 43.17 0.00 0.00 11546.32 4462.69 18849.40 00:07:50.141 [2024-12-10T03:44:41.278Z] =================================================================================================================== 00:07:50.141 [2024-12-10T03:44:41.278Z] Total : 11051.79 43.17 0.00 0.00 11546.32 4462.69 18849.40 00:07:50.141 9703.00 IOPS, 37.90 MiB/s 00:07:50.141 Latency(us) 00:07:50.141 [2024-12-10T03:44:41.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.141 Job: Nvme1n1 (Core Mask 
0x10, workload: write, depth: 128, IO size: 4096) 00:07:50.141 Nvme1n1 : 1.01 9785.15 38.22 0.00 0.00 13043.18 4025.78 24466.77 00:07:50.141 [2024-12-10T03:44:41.278Z] =================================================================================================================== 00:07:50.141 [2024-12-10T03:44:41.278Z] Total : 9785.15 38.22 0.00 0.00 13043.18 4025.78 24466.77 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 488369 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 488371 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 488374 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:50.401 
04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:50.401 rmmod nvme_tcp 00:07:50.401 rmmod nvme_fabrics 00:07:50.401 rmmod nvme_keyring 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 488122 ']' 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 488122 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 488122 ']' 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 488122 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 488122 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 488122' 00:07:50.401 killing process with pid 488122 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 488122 00:07:50.401 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@978 -- # wait 488122 00:07:50.661 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:50.661 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:50.661 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:50.661 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:50.661 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:50.661 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:50.661 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:50.661 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:50.661 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:50.661 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.661 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.661 04:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.568 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:52.568 00:07:52.568 real 0m11.418s 00:07:52.568 user 0m18.693s 00:07:52.568 sys 0m6.246s 00:07:52.568 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.568 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.568 ************************************ 00:07:52.568 END TEST nvmf_bdev_io_wait 00:07:52.568 
************************************ 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.828 ************************************ 00:07:52.828 START TEST nvmf_queue_depth 00:07:52.828 ************************************ 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:52.828 * Looking for test storage... 00:07:52.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.828 04:44:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:52.828 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:52.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.829 --rc genhtml_branch_coverage=1 00:07:52.829 --rc genhtml_function_coverage=1 00:07:52.829 --rc genhtml_legend=1 00:07:52.829 --rc geninfo_all_blocks=1 00:07:52.829 --rc 
geninfo_unexecuted_blocks=1 00:07:52.829 00:07:52.829 ' 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:52.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.829 --rc genhtml_branch_coverage=1 00:07:52.829 --rc genhtml_function_coverage=1 00:07:52.829 --rc genhtml_legend=1 00:07:52.829 --rc geninfo_all_blocks=1 00:07:52.829 --rc geninfo_unexecuted_blocks=1 00:07:52.829 00:07:52.829 ' 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:52.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.829 --rc genhtml_branch_coverage=1 00:07:52.829 --rc genhtml_function_coverage=1 00:07:52.829 --rc genhtml_legend=1 00:07:52.829 --rc geninfo_all_blocks=1 00:07:52.829 --rc geninfo_unexecuted_blocks=1 00:07:52.829 00:07:52.829 ' 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:52.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.829 --rc genhtml_branch_coverage=1 00:07:52.829 --rc genhtml_function_coverage=1 00:07:52.829 --rc genhtml_legend=1 00:07:52.829 --rc geninfo_all_blocks=1 00:07:52.829 --rc geninfo_unexecuted_blocks=1 00:07:52.829 00:07:52.829 ' 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.829 04:44:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.829 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.089 04:44:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:53.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.089 04:44:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:53.089 04:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:59.661 04:44:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.661 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:59.662 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:59.662 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:59.662 Found net devices under 0000:af:00.0: cvl_0_0 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:59.662 Found net devices under 0000:af:00.1: cvl_0_1 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.662 
04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:59.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:07:59.662 00:07:59.662 --- 10.0.0.2 ping statistics --- 00:07:59.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.662 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:59.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:07:59.662 00:07:59.662 --- 10.0.0.1 ping statistics --- 00:07:59.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.662 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:59.662 04:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.662 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=492246 00:07:59.662 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 492246 
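One record earlier in this run is worth flagging: `common.sh: line 33: [: : integer expression expected` comes from the traced command `'[' '' -eq 1 ']'`, i.e. an empty variable expansion fed to an arithmetic test. A minimal reproduction, with the usual `${var:-0}` guard (the variable name here is illustrative, not the one common.sh uses):

```shell
#!/usr/bin/env bash
val=""   # stands in for the empty expansion that reached common.sh line 33

# This form reproduces the logged error: test(1) rejects "" as an integer
# and the comparison exits non-zero with "[: : integer expression expected".
if [ "$val" -eq 1 ] 2>/dev/null; then echo "enabled"; else echo "disabled"; fi

# Defaulting the expansion keeps the test well-formed when the variable is
# unset or empty, so the comparison simply evaluates to false.
if [ "${val:-0}" -eq 1 ]; then echo "enabled"; else echo "disabled"; fi
```

In this run the error is harmless (the script continues down the `else` path), but the guard removes the stderr noise from the trace.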
00:07:59.662 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:59.662 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 492246 ']' 00:07:59.662 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.662 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.662 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.663 [2024-12-10 04:44:50.056388] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:07:59.663 [2024-12-10 04:44:50.056436] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.663 [2024-12-10 04:44:50.140310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.663 [2024-12-10 04:44:50.178717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.663 [2024-12-10 04:44:50.178756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:59.663 [2024-12-10 04:44:50.178765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.663 [2024-12-10 04:44:50.178771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.663 [2024-12-10 04:44:50.178776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.663 [2024-12-10 04:44:50.179267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.663 [2024-12-10 04:44:50.323236] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.663 Malloc0 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.663 [2024-12-10 04:44:50.373467] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.663 04:44:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=492322 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 492322 /var/tmp/bdevperf.sock 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 492322 ']' 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:59.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.663 [2024-12-10 04:44:50.424763] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:07:59.663 [2024-12-10 04:44:50.424805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492322 ] 00:07:59.663 [2024-12-10 04:44:50.498047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.663 [2024-12-10 04:44:50.537599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.663 NVMe0n1 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.663 04:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:59.922 Running I/O for 10 seconds... 
00:08:01.928 12003.00 IOPS, 46.89 MiB/s [2024-12-10T03:44:54.001Z] 12093.50 IOPS, 47.24 MiB/s [2024-12-10T03:44:54.938Z] 12272.00 IOPS, 47.94 MiB/s [2024-12-10T03:44:55.874Z] 12288.00 IOPS, 48.00 MiB/s [2024-12-10T03:44:57.250Z] 12290.40 IOPS, 48.01 MiB/s [2024-12-10T03:44:58.186Z] 12371.50 IOPS, 48.33 MiB/s [2024-12-10T03:44:59.123Z] 12418.43 IOPS, 48.51 MiB/s [2024-12-10T03:45:00.059Z] 12415.25 IOPS, 48.50 MiB/s [2024-12-10T03:45:00.995Z] 12478.56 IOPS, 48.74 MiB/s [2024-12-10T03:45:00.995Z] 12476.80 IOPS, 48.74 MiB/s 00:08:09.858 Latency(us) 00:08:09.858 [2024-12-10T03:45:00.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.858 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:09.858 Verification LBA range: start 0x0 length 0x4000 00:08:09.858 NVMe0n1 : 10.07 12491.55 48.80 0.00 0.00 81698.88 18724.57 52428.80 00:08:09.858 [2024-12-10T03:45:00.995Z] =================================================================================================================== 00:08:09.858 [2024-12-10T03:45:00.995Z] Total : 12491.55 48.80 0.00 0.00 81698.88 18724.57 52428.80 00:08:09.858 { 00:08:09.858 "results": [ 00:08:09.858 { 00:08:09.858 "job": "NVMe0n1", 00:08:09.858 "core_mask": "0x1", 00:08:09.858 "workload": "verify", 00:08:09.858 "status": "finished", 00:08:09.858 "verify_range": { 00:08:09.858 "start": 0, 00:08:09.858 "length": 16384 00:08:09.858 }, 00:08:09.858 "queue_depth": 1024, 00:08:09.858 "io_size": 4096, 00:08:09.858 "runtime": 10.07017, 00:08:09.858 "iops": 12491.546815992182, 00:08:09.858 "mibps": 48.79510474996946, 00:08:09.858 "io_failed": 0, 00:08:09.858 "io_timeout": 0, 00:08:09.858 "avg_latency_us": 81698.88070162687, 00:08:09.858 "min_latency_us": 18724.571428571428, 00:08:09.858 "max_latency_us": 52428.8 00:08:09.858 } 00:08:09.858 ], 00:08:09.858 "core_count": 1 00:08:09.858 } 00:08:09.858 04:45:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 492322 00:08:09.858 04:45:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 492322 ']' 00:08:09.858 04:45:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 492322 00:08:09.858 04:45:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:09.858 04:45:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.858 04:45:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492322 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492322' 00:08:10.118 killing process with pid 492322 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 492322 00:08:10.118 Received shutdown signal, test time was about 10.000000 seconds 00:08:10.118 00:08:10.118 Latency(us) 00:08:10.118 [2024-12-10T03:45:01.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.118 [2024-12-10T03:45:01.255Z] =================================================================================================================== 00:08:10.118 [2024-12-10T03:45:01.255Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 492322 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
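The per-job JSON block that bdevperf printed above can be post-processed with standard tools; a minimal jq-free sketch (figures copied from this run's results) that cross-checks the reported MiB/s against `iops * io_size / 1 MiB`:

```shell
#!/usr/bin/env bash
# Cross-check bdevperf's throughput figures from the JSON results block:
# mibps should equal iops * io_size / 1048576.
iops=12491.546815992182      # "iops" from the results JSON above
io_size=4096                 # "io_size" (bytes) from the results JSON above
reported_mibps=48.79510474996946

computed=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.11f", i * s / 1048576 }')
echo "computed MiB/s: $computed (reported: $reported_mibps)"
```

With a 4096-byte I/O size this reduces to `iops / 256`, which matches the 48.80 MiB/s line in the Latency table.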
00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:10.118 rmmod nvme_tcp 00:08:10.118 rmmod nvme_fabrics 00:08:10.118 rmmod nvme_keyring 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 492246 ']' 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 492246 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 492246 ']' 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 492246 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.118 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492246 00:08:10.377 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:10.377 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:10.377 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492246' 00:08:10.377 killing process with pid 492246 00:08:10.377 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 492246 00:08:10.377 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 492246 00:08:10.377 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:10.377 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:10.377 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:10.377 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:10.377 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:10.377 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:10.377 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:10.377 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:10.377 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:10.377 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.377 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.377 04:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.912 04:45:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:12.912 00:08:12.912 real 0m19.775s 00:08:12.912 user 0m23.074s 00:08:12.912 sys 0m6.020s 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.912 ************************************ 00:08:12.912 END TEST nvmf_queue_depth 00:08:12.912 ************************************ 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:12.912 ************************************ 00:08:12.912 START TEST nvmf_target_multipath 00:08:12.912 ************************************ 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:12.912 * Looking for test storage... 
00:08:12.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:12.912 04:45:03 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:12.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.912 --rc genhtml_branch_coverage=1 00:08:12.912 --rc genhtml_function_coverage=1 00:08:12.912 --rc genhtml_legend=1 00:08:12.912 --rc geninfo_all_blocks=1 00:08:12.912 --rc geninfo_unexecuted_blocks=1 00:08:12.912 00:08:12.912 ' 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:12.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.912 --rc genhtml_branch_coverage=1 00:08:12.912 --rc genhtml_function_coverage=1 00:08:12.912 --rc genhtml_legend=1 00:08:12.912 --rc geninfo_all_blocks=1 00:08:12.912 --rc geninfo_unexecuted_blocks=1 00:08:12.912 00:08:12.912 ' 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:12.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.912 --rc genhtml_branch_coverage=1 00:08:12.912 --rc genhtml_function_coverage=1 00:08:12.912 --rc genhtml_legend=1 00:08:12.912 --rc geninfo_all_blocks=1 00:08:12.912 --rc geninfo_unexecuted_blocks=1 00:08:12.912 00:08:12.912 ' 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:12.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.912 --rc genhtml_branch_coverage=1 00:08:12.912 --rc genhtml_function_coverage=1 00:08:12.912 --rc genhtml_legend=1 00:08:12.912 --rc geninfo_all_blocks=1 00:08:12.912 --rc geninfo_unexecuted_blocks=1 00:08:12.912 00:08:12.912 ' 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.912 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:12.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:12.913 04:45:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:19.484 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.484 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:19.484 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:19.484 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:19.484 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:19.484 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:19.484 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:19.484 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:19.484 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:19.484 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:19.484 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:19.484 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:19.484 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:19.485 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:19.485 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:19.485 Found net devices under 0000:af:00.0: cvl_0_0 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.485 04:45:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:19.485 Found net devices under 0000:af:00.1: cvl_0_1 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:19.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:08:19.485 00:08:19.485 --- 10.0.0.2 ping statistics --- 00:08:19.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.485 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:08:19.485 00:08:19.485 --- 10.0.0.1 ping statistics --- 00:08:19.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.485 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:19.485 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:19.485 only one NIC for nvmf test 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:19.486 04:45:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:19.486 rmmod nvme_tcp 00:08:19.486 rmmod nvme_fabrics 00:08:19.486 rmmod nvme_keyring 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.486 04:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.862 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.120 04:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:21.120 00:08:21.120 real 0m8.397s 00:08:21.120 user 0m1.877s 00:08:21.120 sys 0m4.469s 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:21.120 ************************************ 00:08:21.120 END TEST nvmf_target_multipath 00:08:21.120 ************************************ 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:21.120 ************************************ 00:08:21.120 START TEST nvmf_zcopy 00:08:21.120 ************************************ 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:21.120 * Looking for test storage... 00:08:21.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.120 04:45:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:21.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.120 --rc genhtml_branch_coverage=1 00:08:21.120 --rc genhtml_function_coverage=1 00:08:21.120 --rc genhtml_legend=1 00:08:21.120 --rc geninfo_all_blocks=1 00:08:21.120 --rc geninfo_unexecuted_blocks=1 00:08:21.120 00:08:21.120 ' 00:08:21.120 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:21.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.120 --rc genhtml_branch_coverage=1 00:08:21.120 --rc genhtml_function_coverage=1 00:08:21.120 --rc genhtml_legend=1 00:08:21.121 --rc geninfo_all_blocks=1 00:08:21.121 --rc geninfo_unexecuted_blocks=1 00:08:21.121 00:08:21.121 ' 00:08:21.121 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:21.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.121 --rc genhtml_branch_coverage=1 00:08:21.121 --rc genhtml_function_coverage=1 00:08:21.121 --rc genhtml_legend=1 00:08:21.121 --rc geninfo_all_blocks=1 00:08:21.121 --rc geninfo_unexecuted_blocks=1 00:08:21.121 00:08:21.121 ' 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:21.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.380 --rc genhtml_branch_coverage=1 00:08:21.380 --rc 
genhtml_function_coverage=1 00:08:21.380 --rc genhtml_legend=1 00:08:21.380 --rc geninfo_all_blocks=1 00:08:21.380 --rc geninfo_unexecuted_blocks=1 00:08:21.380 00:08:21.380 ' 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.380 04:45:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:21.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:21.380 04:45:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:21.380 04:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:27.948 04:45:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:27.948 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.948 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:27.949 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:27.949 Found net devices under 0000:af:00.0: cvl_0_0 00:08:27.949 04:45:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:27.949 Found net devices under 0000:af:00.1: cvl_0_1 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.949 04:45:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.949 04:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:27.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:08:27.949 00:08:27.949 --- 10.0.0.2 ping statistics --- 00:08:27.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.949 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:27.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:08:27.949 00:08:27.949 --- 10.0.0.1 ping statistics --- 00:08:27.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.949 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=501570 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 501570 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 501570 ']' 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.949 [2024-12-10 04:45:18.273980] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:08:27.949 [2024-12-10 04:45:18.274029] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.949 [2024-12-10 04:45:18.351952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.949 [2024-12-10 04:45:18.391115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.949 [2024-12-10 04:45:18.391150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:27.949 [2024-12-10 04:45:18.391156] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.949 [2024-12-10 04:45:18.391162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.949 [2024-12-10 04:45:18.391171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.949 [2024-12-10 04:45:18.391651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.949 [2024-12-10 04:45:18.527152] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.949 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.950 [2024-12-10 04:45:18.547340] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.950 malloc0 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:27.950 { 00:08:27.950 "params": { 00:08:27.950 "name": "Nvme$subsystem", 00:08:27.950 "trtype": "$TEST_TRANSPORT", 00:08:27.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.950 "adrfam": "ipv4", 00:08:27.950 "trsvcid": "$NVMF_PORT", 00:08:27.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.950 "hdgst": ${hdgst:-false}, 00:08:27.950 "ddgst": ${ddgst:-false} 00:08:27.950 }, 00:08:27.950 "method": "bdev_nvme_attach_controller" 00:08:27.950 } 00:08:27.950 EOF 00:08:27.950 )") 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
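The `gen_nvmf_target_json` helper above emits a `bdev_nvme_attach_controller` entry, pipes it through `jq .`, and hands the result to bdevperf over `/dev/fd/62` via process substitution. A minimal standalone sketch of the same pattern follows; the `subsystems`/`bdev` wrapper around the entry is an assumption about the full config shape (the log only shows the attach_controller entry itself), and all addresses and NQNs are copied from the log for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the JSON config bdevperf consumes: one NVMe-oF/TCP controller
# attach entry, wrapped in the bdev subsystem section (wrapper is assumed).
gen_target_json() {
    cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# bdevperf reads this through process substitution, which is why the log
# shows --json /dev/fd/62 rather than a file on disk, e.g.:
#   bdevperf --json <(gen_target_json) -t 10 -q 128 -w verify -o 8192
gen_target_json | grep -c bdev_nvme_attach_controller   # -> 1
```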
00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:27.950 04:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:27.950 "params": { 00:08:27.950 "name": "Nvme1", 00:08:27.950 "trtype": "tcp", 00:08:27.950 "traddr": "10.0.0.2", 00:08:27.950 "adrfam": "ipv4", 00:08:27.950 "trsvcid": "4420", 00:08:27.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.950 "hdgst": false, 00:08:27.950 "ddgst": false 00:08:27.950 }, 00:08:27.950 "method": "bdev_nvme_attach_controller" 00:08:27.950 }' 00:08:27.950 [2024-12-10 04:45:18.629344] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:08:27.950 [2024-12-10 04:45:18.629391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid501621 ] 00:08:27.950 [2024-12-10 04:45:18.705741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.950 [2024-12-10 04:45:18.747112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.950 Running I/O for 10 seconds... 
00:08:30.263 8741.00 IOPS, 68.29 MiB/s [2024-12-10T03:45:22.336Z] 8682.00 IOPS, 67.83 MiB/s [2024-12-10T03:45:23.272Z] 8735.33 IOPS, 68.24 MiB/s [2024-12-10T03:45:24.208Z] 8761.25 IOPS, 68.45 MiB/s [2024-12-10T03:45:25.144Z] 8787.00 IOPS, 68.65 MiB/s [2024-12-10T03:45:26.079Z] 8801.67 IOPS, 68.76 MiB/s [2024-12-10T03:45:27.456Z] 8811.14 IOPS, 68.84 MiB/s [2024-12-10T03:45:28.023Z] 8822.88 IOPS, 68.93 MiB/s [2024-12-10T03:45:29.401Z] 8830.22 IOPS, 68.99 MiB/s [2024-12-10T03:45:29.401Z] 8832.80 IOPS, 69.01 MiB/s 00:08:38.264 Latency(us) 00:08:38.264 [2024-12-10T03:45:29.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.264 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:38.264 Verification LBA range: start 0x0 length 0x1000 00:08:38.264 Nvme1n1 : 10.01 8834.81 69.02 0.00 0.00 14446.96 1685.21 21470.84 00:08:38.264 [2024-12-10T03:45:29.401Z] =================================================================================================================== 00:08:38.264 [2024-12-10T03:45:29.401Z] Total : 8834.81 69.02 0.00 0.00 14446.96 1685.21 21470.84 00:08:38.264 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=503384 00:08:38.265 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:38.265 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:38.265 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:38.265 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:38.265 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:38.265 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:38.265 04:45:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:38.265 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:38.265 { 00:08:38.265 "params": { 00:08:38.265 "name": "Nvme$subsystem", 00:08:38.265 "trtype": "$TEST_TRANSPORT", 00:08:38.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:38.265 "adrfam": "ipv4", 00:08:38.265 "trsvcid": "$NVMF_PORT", 00:08:38.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:38.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:38.265 "hdgst": ${hdgst:-false}, 00:08:38.265 "ddgst": ${ddgst:-false} 00:08:38.265 }, 00:08:38.265 "method": "bdev_nvme_attach_controller" 00:08:38.265 } 00:08:38.265 EOF 00:08:38.265 )") 00:08:38.265 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:38.265 [2024-12-10 04:45:29.189292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.265 [2024-12-10 04:45:29.189323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.265 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
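The verify-run summary above reports 8834.81 IOPS at the 8192-byte I/O size passed via `-o 8192`, and its MiB/s column follows directly from that figure. A quick sanity check of the arithmetic (pure awk, no SPDK needed):

```shell
# MiB/s = IOPS * io_size_bytes / 2^20; at 8 KiB per I/O this is IOPS / 128.
awk 'BEGIN { printf "%.2f\n", 8834.81 * 8192 / (1024 * 1024) }'   # -> 69.02
```

This matches the 69.02 MiB/s printed in the Nvme1n1 row of the Latency table.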
00:08:38.265 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:38.265 04:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:38.265 "params": { 00:08:38.265 "name": "Nvme1", 00:08:38.265 "trtype": "tcp", 00:08:38.265 "traddr": "10.0.0.2", 00:08:38.265 "adrfam": "ipv4", 00:08:38.265 "trsvcid": "4420", 00:08:38.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:38.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:38.265 "hdgst": false, 00:08:38.265 "ddgst": false 00:08:38.265 }, 00:08:38.265 "method": "bdev_nvme_attach_controller" 00:08:38.265 }' 00:08:38.265 [2024-12-10 04:45:29.201294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.265 [2024-12-10 04:45:29.201307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.265 [2024-12-10 04:45:29.213324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.265 [2024-12-10 04:45:29.213333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.265 [2024-12-10 04:45:29.225355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.265 [2024-12-10 04:45:29.225365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.265 [2024-12-10 04:45:29.229974] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:08:38.265 [2024-12-10 04:45:29.230014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid503384 ] 00:08:38.265 [2024-12-10 04:45:29.237388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.265 [2024-12-10 04:45:29.237398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.265 [2024-12-10 04:45:29.249417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.265 [2024-12-10 04:45:29.249426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.265 [2024-12-10 04:45:29.261452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.265 [2024-12-10 04:45:29.261461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.265 [2024-12-10 04:45:29.273482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.265 [2024-12-10 04:45:29.273492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.265 [2024-12-10 04:45:29.285514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.265 [2024-12-10 04:45:29.285523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.265 [2024-12-10 04:45:29.297546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.265 [2024-12-10 04:45:29.297554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.265 [2024-12-10 04:45:29.304264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.265 [2024-12-10 04:45:29.309587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:38.265 [2024-12-10 04:45:29.309601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.265 [2024-12-10 04:45:29.321615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.265 [2024-12-10 04:45:29.321627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.265 [2024-12-10 04:45:29.333644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.265 [2024-12-10 04:45:29.333653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.265 [2024-12-10 04:45:29.344195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.265 [2024-12-10 04:45:29.345683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.265 [2024-12-10 04:45:29.345694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.265 [2024-12-10 04:45:29.357725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.265 [2024-12-10 04:45:29.357739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.265 [2024-12-10 04:45:29.369753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.265 [2024-12-10 04:45:29.369772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.265 [2024-12-10 04:45:29.381786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.265 [2024-12-10 04:45:29.381799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.265 [2024-12-10 04:45:29.393823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.265 [2024-12-10 04:45:29.393834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-12-10 04:45:29.405845] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-12-10 04:45:29.405860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-12-10 04:45:29.417875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.417890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.430133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.430152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.442163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.442184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.454188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.454202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.466220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.466236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.478246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.478259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.490278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.490287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.502310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.502320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.514345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.514359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.526374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.526385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.538407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.538416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.550439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.550448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.562477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.562489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.574509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.574519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.586541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.586551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.598573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 
[2024-12-10 04:45:29.598585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.610636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.610651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.622656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.622674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 Running I/O for 5 seconds... 00:08:38.525 [2024-12-10 04:45:29.639121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.639141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.525 [2024-12-10 04:45:29.654592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.525 [2024-12-10 04:45:29.654612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.668355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.668376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.682121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.682141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.696182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.696201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.710075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 
04:45:29.710096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.724046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.724065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.738006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.738025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.751367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.751387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.765242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.765260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.779085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.779104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.792356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.792374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.806122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.806140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.819847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.819866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.833268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.833287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.846988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.847006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.860709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.860728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.874649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.874668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.888328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.888347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.901919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.901937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.784 [2024-12-10 04:45:29.915711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.784 [2024-12-10 04:45:29.915730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.043 [2024-12-10 04:45:29.929552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.043 [2024-12-10 04:45:29.929571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.044 
[2024-12-10 04:45:29.943385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.044
[2024-12-10 04:45:29.943404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.044
[... the same two-line error pair repeats for each retry from 04:45:29.957 through 04:45:30.619, roughly every 13 ms; repeats elided ...]
16688.00 IOPS, 130.38 MiB/s [2024-12-10T03:45:30.700Z]
[... the same two-line error pair repeats for each retry from 04:45:30.633 through 04:45:31.623; repeats elided ...]
16900.00 IOPS, 132.03 MiB/s [2024-12-10T03:45:31.735Z]
[... the same two-line error pair repeats for each retry from 04:45:31.636 through 04:45:32.267; repeats elided ...]
[2024-12-10 04:45:32.281588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.376
[2024-12-10 04:45:32.281608] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.376 [2024-12-10 04:45:32.295914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.376 [2024-12-10 04:45:32.295934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.376 [2024-12-10 04:45:32.306695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.376 [2024-12-10 04:45:32.306714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.376 [2024-12-10 04:45:32.320599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.376 [2024-12-10 04:45:32.320619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.376 [2024-12-10 04:45:32.334293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.376 [2024-12-10 04:45:32.334312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.376 [2024-12-10 04:45:32.348014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.376 [2024-12-10 04:45:32.348033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.376 [2024-12-10 04:45:32.361697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.376 [2024-12-10 04:45:32.361715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.376 [2024-12-10 04:45:32.375225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.376 [2024-12-10 04:45:32.375243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.376 [2024-12-10 04:45:32.389129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.376 [2024-12-10 04:45:32.389148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:41.376 [2024-12-10 04:45:32.402493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.376 [2024-12-10 04:45:32.402511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.376 [2024-12-10 04:45:32.416090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.376 [2024-12-10 04:45:32.416108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.376 [2024-12-10 04:45:32.429875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.376 [2024-12-10 04:45:32.429894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.376 [2024-12-10 04:45:32.443753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.376 [2024-12-10 04:45:32.443772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.376 [2024-12-10 04:45:32.457524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.376 [2024-12-10 04:45:32.457544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.376 [2024-12-10 04:45:32.471750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.376 [2024-12-10 04:45:32.471769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.376 [2024-12-10 04:45:32.482665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.376 [2024-12-10 04:45:32.482688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.376 [2024-12-10 04:45:32.497011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.376 [2024-12-10 04:45:32.497029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.510452] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.510471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.524327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.524345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.537816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.537834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.551180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.551199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.564667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.564686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.578480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.578498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.592542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.592561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.606133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.606151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.619887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.619905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.633448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.633467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 16971.33 IOPS, 132.59 MiB/s [2024-12-10T03:45:32.772Z] [2024-12-10 04:45:32.647111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.647129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.660825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.660843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.674684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.674702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.688236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.688256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.701853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.701873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.715848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.715867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.729293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.729312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.743257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.743276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.635 [2024-12-10 04:45:32.756813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.635 [2024-12-10 04:45:32.756832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.894 [2024-12-10 04:45:32.770489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.894 [2024-12-10 04:45:32.770508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.894 [2024-12-10 04:45:32.784198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.894 [2024-12-10 04:45:32.784216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.894 [2024-12-10 04:45:32.797558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.894 [2024-12-10 04:45:32.797576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.894 [2024-12-10 04:45:32.811339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.894 [2024-12-10 04:45:32.811358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.894 [2024-12-10 04:45:32.825331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.894 [2024-12-10 04:45:32.825350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.894 [2024-12-10 04:45:32.838758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.894 
[2024-12-10 04:45:32.838776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.894 [2024-12-10 04:45:32.852242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.894 [2024-12-10 04:45:32.852261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.894 [2024-12-10 04:45:32.866282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.894 [2024-12-10 04:45:32.866300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.894 [2024-12-10 04:45:32.879672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.894 [2024-12-10 04:45:32.879690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.894 [2024-12-10 04:45:32.893721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.894 [2024-12-10 04:45:32.893739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.894 [2024-12-10 04:45:32.907248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.894 [2024-12-10 04:45:32.907267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.894 [2024-12-10 04:45:32.920706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.894 [2024-12-10 04:45:32.920725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.894 [2024-12-10 04:45:32.934666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.894 [2024-12-10 04:45:32.934684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.895 [2024-12-10 04:45:32.948489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.895 [2024-12-10 04:45:32.948507] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.895 [2024-12-10 04:45:32.961993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.895 [2024-12-10 04:45:32.962012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.895 [2024-12-10 04:45:32.975421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.895 [2024-12-10 04:45:32.975438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.895 [2024-12-10 04:45:32.989435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.895 [2024-12-10 04:45:32.989453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.895 [2024-12-10 04:45:33.003312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.895 [2024-12-10 04:45:33.003331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.895 [2024-12-10 04:45:33.016887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.895 [2024-12-10 04:45:33.016905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.030784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.030803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.044059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.044078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.057686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.057704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:42.154 [2024-12-10 04:45:33.071474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.071493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.085530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.085548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.099914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.099932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.114923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.114942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.129062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.129080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.142487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.142505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.156086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.156104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.169403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.169422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.182941] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.182959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.196654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.196673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.210432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.210451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.224176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.224194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.238278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.238305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.251625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.251644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.264863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.264882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.154 [2024-12-10 04:45:33.278647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.154 [2024-12-10 04:45:33.278664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.413 [2024-12-10 04:45:33.292327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:42.413 [2024-12-10 04:45:33.292345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.413 [2024-12-10 04:45:33.306060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.413 [2024-12-10 04:45:33.306078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.413 [2024-12-10 04:45:33.319557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.413 [2024-12-10 04:45:33.319575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.413 [2024-12-10 04:45:33.333555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.413 [2024-12-10 04:45:33.333574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.413 [2024-12-10 04:45:33.347125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.413 [2024-12-10 04:45:33.347143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.413 [2024-12-10 04:45:33.360652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.413 [2024-12-10 04:45:33.360670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.413 [2024-12-10 04:45:33.374127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.413 [2024-12-10 04:45:33.374145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.413 [2024-12-10 04:45:33.387463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.413 [2024-12-10 04:45:33.387482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.413 [2024-12-10 04:45:33.401056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.413 
[2024-12-10 04:45:33.401074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.413 [2024-12-10 04:45:33.414171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.413 [2024-12-10 04:45:33.414190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.413 [2024-12-10 04:45:33.427799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.413 [2024-12-10 04:45:33.427818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.413 [2024-12-10 04:45:33.441448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.413 [2024-12-10 04:45:33.441467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.413 [2024-12-10 04:45:33.455097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.413 [2024-12-10 04:45:33.455115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.413 [2024-12-10 04:45:33.468558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.414 [2024-12-10 04:45:33.468576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.414 [2024-12-10 04:45:33.482030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.414 [2024-12-10 04:45:33.482054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.414 [2024-12-10 04:45:33.495401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.414 [2024-12-10 04:45:33.495419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.414 [2024-12-10 04:45:33.509173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.414 [2024-12-10 04:45:33.509198] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.414 [2024-12-10 04:45:33.522900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.414 [2024-12-10 04:45:33.522920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.414 [2024-12-10 04:45:33.536652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.414 [2024-12-10 04:45:33.536674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.672 [2024-12-10 04:45:33.550451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.672 [2024-12-10 04:45:33.550470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.672 [2024-12-10 04:45:33.563629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.673 [2024-12-10 04:45:33.563648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.673 [2024-12-10 04:45:33.577762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.673 [2024-12-10 04:45:33.577783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.673 [2024-12-10 04:45:33.591764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.673 [2024-12-10 04:45:33.591784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.673 [2024-12-10 04:45:33.605296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.673 [2024-12-10 04:45:33.605315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.673 [2024-12-10 04:45:33.619027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.673 [2024-12-10 04:45:33.619048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:42.673 [2024-12-10 04:45:33.632359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.673 [2024-12-10 04:45:33.632378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.673 17007.00 IOPS, 132.87 MiB/s [2024-12-10T03:45:33.810Z] [2024-12-10 04:45:33.645676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.673 [2024-12-10 04:45:33.645694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.673 [2024-12-10 04:45:33.659409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.673 [2024-12-10 04:45:33.659428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.673 [2024-12-10 04:45:33.672785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.673 [2024-12-10 04:45:33.672803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.673 [2024-12-10 04:45:33.686581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.673 [2024-12-10 04:45:33.686600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.673 [2024-12-10 04:45:33.700203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.673 [2024-12-10 04:45:33.700222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.673 [2024-12-10 04:45:33.713839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.673 [2024-12-10 04:45:33.713857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.673 [2024-12-10 04:45:33.727485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.673 [2024-12-10 04:45:33.727504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:42.673 [2024-12-10 04:45:33.740874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.673 [2024-12-10 04:45:33.740893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.673 (previous two messages repeated with advancing timestamps, 04:45:33.754 through 04:45:34.614) 00:08:43.711 [2024-12-10 04:45:34.628226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.711 
[2024-12-10 04:45:34.628245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.711 17040.60 IOPS, 133.13 MiB/s [2024-12-10T03:45:34.848Z] [2024-12-10 04:45:34.641514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.711 [2024-12-10 04:45:34.641533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.711 00:08:43.711 Latency(us) 00:08:43.711 [2024-12-10T03:45:34.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.711 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:43.711 Nvme1n1 : 5.01 17043.05 133.15 0.00 0.00 7503.03 3588.88 18225.25 00:08:43.711 [2024-12-10T03:45:34.848Z] =================================================================================================================== 00:08:43.711 [2024-12-10T03:45:34.848Z] Total : 17043.05 133.15 0.00 0.00 7503.03 3588.88 18225.25 00:08:43.711 [2024-12-10 04:45:34.650459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.711 [2024-12-10 04:45:34.650476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.711 [2024-12-10 04:45:34.662489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.711 [2024-12-10 04:45:34.662503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.711 [2024-12-10 04:45:34.674532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.711 [2024-12-10 04:45:34.674550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.711 [2024-12-10 04:45:34.686555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.711 [2024-12-10 04:45:34.686571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.711 [2024-12-10 04:45:34.698586] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.711 [2024-12-10 04:45:34.698600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.711 (previous two messages repeated with advancing timestamps, 04:45:34.710 through 04:45:34.782) 00:08:43.711 [2024-12-10 04:45:34.794836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:43.711 [2024-12-10 04:45:34.794845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (503384) - No such process 00:08:43.711 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 503384 00:08:43.711 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.711 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.711 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.711 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.711 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:43.711 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.711 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.711 delay0 00:08:43.711 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.711 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:43.711 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.711 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.711 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.711 04:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:43.970 [2024-12-10 04:45:34.982311] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:50.534 Initializing NVMe Controllers 00:08:50.534 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:50.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:50.534 Initialization complete. Launching workers. 00:08:50.534 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1129 00:08:50.534 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1416, failed to submit 33 00:08:50.534 success 1217, unsuccessful 199, failed 0 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:50.534 rmmod nvme_tcp 00:08:50.534 rmmod nvme_fabrics 00:08:50.534 rmmod nvme_keyring 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.534 04:45:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 501570 ']' 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 501570 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 501570 ']' 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 501570 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 501570 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 501570' 00:08:50.534 killing process with pid 501570 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 501570 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 501570 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 
00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.534 04:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:53.069 00:08:53.069 real 0m31.574s 00:08:53.069 user 0m42.533s 00:08:53.069 sys 0m10.973s 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.069 ************************************ 00:08:53.069 END TEST nvmf_zcopy 00:08:53.069 ************************************ 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.069 
************************************ 00:08:53.069 START TEST nvmf_nmic 00:08:53.069 ************************************ 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:53.069 * Looking for test storage... 00:08:53.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 
-- # local lt=0 gt=0 eq=0 v 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.069 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:53.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.070 --rc genhtml_branch_coverage=1 00:08:53.070 --rc genhtml_function_coverage=1 00:08:53.070 --rc genhtml_legend=1 00:08:53.070 --rc geninfo_all_blocks=1 00:08:53.070 --rc geninfo_unexecuted_blocks=1 00:08:53.070 00:08:53.070 ' 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:53.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.070 --rc genhtml_branch_coverage=1 00:08:53.070 --rc genhtml_function_coverage=1 00:08:53.070 --rc genhtml_legend=1 00:08:53.070 --rc geninfo_all_blocks=1 00:08:53.070 --rc geninfo_unexecuted_blocks=1 00:08:53.070 00:08:53.070 ' 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:53.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.070 --rc genhtml_branch_coverage=1 00:08:53.070 --rc genhtml_function_coverage=1 00:08:53.070 --rc genhtml_legend=1 00:08:53.070 --rc geninfo_all_blocks=1 00:08:53.070 --rc geninfo_unexecuted_blocks=1 00:08:53.070 00:08:53.070 ' 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:53.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.070 --rc genhtml_branch_coverage=1 00:08:53.070 --rc genhtml_function_coverage=1 00:08:53.070 --rc genhtml_legend=1 00:08:53.070 --rc geninfo_all_blocks=1 00:08:53.070 --rc geninfo_unexecuted_blocks=1 00:08:53.070 00:08:53.070 ' 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.070 
04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 
00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.070 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.071 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.071 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.071 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.071 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:53.071 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:53.071 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:53.071 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:53.071 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.071 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:53.071 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:53.071 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:53.071 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.071 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.071 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.071 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:53.071 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:53.071 
04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:53.071 04:45:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.642 04:45:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:59.642 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:59.642 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:59.642 Found net devices under 0000:af:00.0: cvl_0_0 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:59.642 Found net devices under 0000:af:00.1: cvl_0_1 00:08:59.642 
04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:59.642 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:59.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:08:59.643 00:08:59.643 --- 10.0.0.2 ping statistics --- 00:08:59.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.643 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:59.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:08:59.643 00:08:59.643 --- 10.0.0.1 ping statistics --- 00:08:59.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.643 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:59.643 04:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=508899 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.643 
04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 508899 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 508899 ']' 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.643 [2024-12-10 04:45:50.062434] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:08:59.643 [2024-12-10 04:45:50.062487] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.643 [2024-12-10 04:45:50.145639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.643 [2024-12-10 04:45:50.188013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.643 [2024-12-10 04:45:50.188051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.643 [2024-12-10 04:45:50.188058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.643 [2024-12-10 04:45:50.188064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:59.643 [2024-12-10 04:45:50.188069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.643 [2024-12-10 04:45:50.189386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.643 [2024-12-10 04:45:50.189495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.643 [2024-12-10 04:45:50.189521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.643 [2024-12-10 04:45:50.189522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.643 [2024-12-10 04:45:50.338392] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:59.643 04:45:50 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.643 Malloc0 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.643 [2024-12-10 04:45:50.401717] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:59.643 test case1: single bdev can't be used in multiple subsystems 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.643 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.643 [2024-12-10 04:45:50.425609] bdev.c:8511:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:59.643 [2024-12-10 04:45:50.425628] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:59.644 [2024-12-10 04:45:50.425635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:08:59.644 request: 00:08:59.644 { 00:08:59.644 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:59.644 "namespace": { 00:08:59.644 "bdev_name": "Malloc0", 00:08:59.644 "no_auto_visible": false, 00:08:59.644 "hide_metadata": false 00:08:59.644 }, 00:08:59.644 "method": "nvmf_subsystem_add_ns", 00:08:59.644 "req_id": 1 00:08:59.644 } 00:08:59.644 Got JSON-RPC error response 00:08:59.644 response: 00:08:59.644 { 00:08:59.644 "code": -32602, 00:08:59.644 "message": "Invalid parameters" 00:08:59.644 } 00:08:59.644 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:59.644 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:59.644 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:59.644 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:59.644 Adding namespace failed - expected result. 
00:08:59.644 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:59.644 test case2: host connect to nvmf target in multiple paths 00:08:59.644 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:59.644 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.644 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.644 [2024-12-10 04:45:50.437713] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:59.644 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.644 04:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.726 04:45:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:01.660 04:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:01.660 04:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:01.660 04:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:01.660 04:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:01.660 04:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:09:04.187 04:45:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:04.187 04:45:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:04.187 04:45:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.187 04:45:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:04.187 04:45:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:04.187 04:45:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:04.187 04:45:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:04.187 [global] 00:09:04.187 thread=1 00:09:04.187 invalidate=1 00:09:04.187 rw=write 00:09:04.187 time_based=1 00:09:04.187 runtime=1 00:09:04.187 ioengine=libaio 00:09:04.187 direct=1 00:09:04.187 bs=4096 00:09:04.187 iodepth=1 00:09:04.187 norandommap=0 00:09:04.187 numjobs=1 00:09:04.187 00:09:04.187 verify_dump=1 00:09:04.187 verify_backlog=512 00:09:04.187 verify_state_save=0 00:09:04.187 do_verify=1 00:09:04.187 verify=crc32c-intel 00:09:04.187 [job0] 00:09:04.187 filename=/dev/nvme0n1 00:09:04.187 Could not set queue depth (nvme0n1) 00:09:04.187 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:04.187 fio-3.35 00:09:04.187 Starting 1 thread 00:09:05.120 00:09:05.120 job0: (groupid=0, jobs=1): err= 0: pid=509922: Tue Dec 10 04:45:56 2024 00:09:05.120 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:09:05.120 slat (nsec): min=9426, max=24366, avg=22360.64, stdev=2947.56 00:09:05.120 clat (usec): min=40712, max=41119, avg=40955.92, stdev=95.68 00:09:05.120 lat (usec): min=40721, max=41141, 
avg=40978.28, stdev=97.25 00:09:05.120 clat percentiles (usec): 00:09:05.120 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:05.120 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:05.120 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:05.120 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:05.120 | 99.99th=[41157] 00:09:05.120 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:09:05.120 slat (usec): min=10, max=28093, avg=66.89, stdev=1241.03 00:09:05.120 clat (usec): min=111, max=329, avg=145.20, stdev=31.88 00:09:05.120 lat (usec): min=122, max=28423, avg=212.10, stdev=1249.60 00:09:05.120 clat percentiles (usec): 00:09:05.120 | 1.00th=[ 119], 5.00th=[ 122], 10.00th=[ 122], 20.00th=[ 124], 00:09:05.120 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 137], 00:09:05.120 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 237], 00:09:05.120 | 99.00th=[ 247], 99.50th=[ 273], 99.90th=[ 330], 99.95th=[ 330], 00:09:05.120 | 99.99th=[ 330] 00:09:05.120 bw ( KiB/s): min= 4087, max= 4087, per=100.00%, avg=4087.00, stdev= 0.00, samples=1 00:09:05.120 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:05.120 lat (usec) : 250=95.32%, 500=0.56% 00:09:05.120 lat (msec) : 50=4.12% 00:09:05.120 cpu : usr=0.49%, sys=0.79%, ctx=536, majf=0, minf=1 00:09:05.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.120 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.120 00:09:05.120 Run status group 0 (all jobs): 00:09:05.120 READ: bw=87.0KiB/s (89.0kB/s), 87.0KiB/s-87.0KiB/s (89.0kB/s-89.0kB/s), io=88.0KiB (90.1kB), 
run=1012-1012msec 00:09:05.120 WRITE: bw=2024KiB/s (2072kB/s), 2024KiB/s-2024KiB/s (2072kB/s-2072kB/s), io=2048KiB (2097kB), run=1012-1012msec 00:09:05.120 00:09:05.120 Disk stats (read/write): 00:09:05.120 nvme0n1: ios=45/512, merge=0/0, ticks=1764/61, in_queue=1825, util=98.40% 00:09:05.120 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:05.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:05.378 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:05.378 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.379 rmmod nvme_tcp 00:09:05.379 rmmod nvme_fabrics 00:09:05.379 rmmod nvme_keyring 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 508899 ']' 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 508899 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 508899 ']' 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 508899 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.379 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 508899 00:09:05.638 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.638 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.638 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 508899' 00:09:05.638 killing process with pid 508899 00:09:05.638 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 508899 00:09:05.638 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 508899 00:09:05.638 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:05.638 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:05.638 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:05.638 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:05.638 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:05.638 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:05.638 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:05.638 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:05.638 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:05.638 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.638 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.638 04:45:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:08.173 00:09:08.173 real 0m15.043s 00:09:08.173 user 0m33.337s 00:09:08.173 sys 0m5.204s 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.173 ************************************ 00:09:08.173 END TEST nvmf_nmic 00:09:08.173 ************************************ 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:08.173 ************************************ 00:09:08.173 START TEST nvmf_fio_target 00:09:08.173 ************************************ 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:08.173 * Looking for test storage... 00:09:08.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:08.173 04:45:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.173 04:45:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:08.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.173 --rc genhtml_branch_coverage=1 00:09:08.173 --rc genhtml_function_coverage=1 00:09:08.173 --rc genhtml_legend=1 00:09:08.173 --rc geninfo_all_blocks=1 00:09:08.173 --rc geninfo_unexecuted_blocks=1 00:09:08.173 00:09:08.173 ' 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:08.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.173 --rc genhtml_branch_coverage=1 00:09:08.173 --rc genhtml_function_coverage=1 00:09:08.173 --rc genhtml_legend=1 00:09:08.173 --rc geninfo_all_blocks=1 00:09:08.173 --rc geninfo_unexecuted_blocks=1 00:09:08.173 00:09:08.173 ' 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:08.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.173 --rc genhtml_branch_coverage=1 00:09:08.173 --rc genhtml_function_coverage=1 00:09:08.173 --rc genhtml_legend=1 00:09:08.173 --rc geninfo_all_blocks=1 00:09:08.173 --rc geninfo_unexecuted_blocks=1 00:09:08.173 00:09:08.173 ' 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:08.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.173 --rc 
genhtml_branch_coverage=1 00:09:08.173 --rc genhtml_function_coverage=1 00:09:08.173 --rc genhtml_legend=1 00:09:08.173 --rc geninfo_all_blocks=1 00:09:08.173 --rc geninfo_unexecuted_blocks=1 00:09:08.173 00:09:08.173 ' 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.173 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:08.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:08.174 04:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:14.744 04:46:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.744 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:14.745 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:14.745 04:46:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:14.745 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:14.745 Found net devices under 0000:af:00.0: cvl_0_0 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:14.745 Found net devices under 0000:af:00.1: cvl_0_1 
00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:14.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:14.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:09:14.745 00:09:14.745 --- 10.0.0.2 ping statistics --- 00:09:14.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.745 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:09:14.745 00:09:14.745 --- 10.0.0.1 ping statistics --- 00:09:14.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.745 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:14.745 04:46:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:14.745 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:14.745 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:09:14.745 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:14.745 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.745 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=513625 00:09:14.745 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 513625 00:09:14.745 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.745 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 513625 ']' 00:09:14.745 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.745 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.745 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.745 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.745 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.745 [2024-12-10 04:46:05.068030] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:09:14.745 [2024-12-10 04:46:05.068080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.745 [2024-12-10 04:46:05.145087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.746 [2024-12-10 04:46:05.185647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.746 [2024-12-10 04:46:05.185684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.746 [2024-12-10 04:46:05.185691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.746 [2024-12-10 04:46:05.185697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.746 [2024-12-10 04:46:05.185702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:14.746 [2024-12-10 04:46:05.187155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.746 [2024-12-10 04:46:05.187266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.746 [2024-12-10 04:46:05.187300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.746 [2024-12-10 04:46:05.187301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.746 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.746 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:14.746 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:14.746 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:14.746 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.746 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.746 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:14.746 [2024-12-10 04:46:05.517554] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.746 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:14.746 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:14.746 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.004 04:46:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:15.004 04:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.263 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:15.263 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.521 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:15.521 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:15.521 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.779 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:15.779 04:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.038 04:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:16.038 04:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.297 04:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:16.297 04:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:16.297 04:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:16.555 04:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:16.555 04:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:16.814 04:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:16.814 04:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:17.072 04:46:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.072 [2024-12-10 04:46:08.202387] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.330 04:46:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:17.330 04:46:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:17.589 04:46:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
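The `target/fio.sh` RPC sequence traced above (create the TCP transport, seven malloc bdevs, a raid0 and a concat RAID over five of them, one subsystem with four namespaces, and a listener on 10.0.0.2:4420) can be sketched as one script. Arguments and the serial are taken from this log; the `RPC` path is an assumption, and by default the commands are only printed (point `RPC` at the real `scripts/rpc.py` and set `DRY_RUN=0` to execute against a running `nvmf_tgt`).

```shell
#!/usr/bin/env bash
# Sketch of the target configuration performed by target/fio.sh above.
set -euo pipefail

RPC="${RPC:-scripts/rpc.py}"        # assumed path to SPDK's rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

rpc() {
    if [ "${DRY_RUN:-1}" = 0 ]; then "$RPC" "$@"; else echo "+ rpc.py $*"; fi
}

configure_target() {
    rpc nvmf_create_transport -t tcp -o -u 8192

    # Seven 64 MiB malloc bdevs with 512 B blocks: Malloc0/1 exported
    # directly, Malloc2/3 backing raid0, Malloc4/5/6 backing concat0.
    for i in 0 1 2 3 4 5 6; do
        rpc bdev_malloc_create 64 512
    done
    rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

    # One subsystem, four namespaces (fio later sees nvme0n1..nvme0n4).
    rpc nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
    for ns in Malloc0 Malloc1 raid0 concat0; do
        rpc nvmf_subsystem_add_ns "$NQN" "$ns"
    done
    rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
}

configure_target
```

After this, the `nvme connect` shown in the log attaches all four namespaces at once, which is why the subsequent `waitforserial SPDKISFASTANDAWESOME 4` polls `lsblk` until it counts four devices.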
00:09:18.965 04:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:18.965 04:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:18.965 04:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.965 04:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:18.965 04:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:18.965 04:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:20.865 04:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:20.865 04:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:20.865 04:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.865 04:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:20.865 04:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.865 04:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:20.865 04:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:20.865 [global] 00:09:20.865 thread=1 00:09:20.865 invalidate=1 00:09:20.865 rw=write 00:09:20.865 time_based=1 00:09:20.865 runtime=1 00:09:20.865 ioengine=libaio 00:09:20.865 direct=1 00:09:20.865 bs=4096 00:09:20.865 iodepth=1 00:09:20.865 norandommap=0 00:09:20.865 numjobs=1 00:09:20.865 00:09:20.865 
verify_dump=1 00:09:20.865 verify_backlog=512 00:09:20.865 verify_state_save=0 00:09:20.865 do_verify=1 00:09:20.865 verify=crc32c-intel 00:09:20.865 [job0] 00:09:20.865 filename=/dev/nvme0n1 00:09:20.865 [job1] 00:09:20.865 filename=/dev/nvme0n2 00:09:20.865 [job2] 00:09:20.865 filename=/dev/nvme0n3 00:09:20.865 [job3] 00:09:20.865 filename=/dev/nvme0n4 00:09:20.865 Could not set queue depth (nvme0n1) 00:09:20.865 Could not set queue depth (nvme0n2) 00:09:20.865 Could not set queue depth (nvme0n3) 00:09:20.865 Could not set queue depth (nvme0n4) 00:09:21.123 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.123 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.123 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.123 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.123 fio-3.35 00:09:21.123 Starting 4 threads 00:09:22.522 00:09:22.522 job0: (groupid=0, jobs=1): err= 0: pid=515024: Tue Dec 10 04:46:13 2024 00:09:22.522 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:09:22.522 slat (nsec): min=10074, max=26695, avg=22935.36, stdev=3010.94 00:09:22.522 clat (usec): min=40827, max=41096, avg=40970.70, stdev=71.83 00:09:22.522 lat (usec): min=40851, max=41120, avg=40993.64, stdev=71.62 00:09:22.522 clat percentiles (usec): 00:09:22.522 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:22.522 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:22.522 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:22.522 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:22.522 | 99.99th=[41157] 00:09:22.522 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:09:22.522 slat (nsec): min=10897, max=40097, 
avg=12434.18, stdev=2117.72 00:09:22.522 clat (usec): min=136, max=282, avg=190.73, stdev=20.74 00:09:22.522 lat (usec): min=148, max=323, avg=203.16, stdev=20.94 00:09:22.522 clat percentiles (usec): 00:09:22.522 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 169], 20.00th=[ 176], 00:09:22.522 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:09:22.522 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 217], 95.00th=[ 239], 00:09:22.522 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 285], 99.95th=[ 285], 00:09:22.522 | 99.99th=[ 285] 00:09:22.522 bw ( KiB/s): min= 4096, max= 4096, per=14.54%, avg=4096.00, stdev= 0.00, samples=1 00:09:22.522 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:22.522 lat (usec) : 250=95.51%, 500=0.37% 00:09:22.522 lat (msec) : 50=4.12% 00:09:22.522 cpu : usr=0.50%, sys=0.89%, ctx=535, majf=0, minf=1 00:09:22.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.522 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.522 job1: (groupid=0, jobs=1): err= 0: pid=515040: Tue Dec 10 04:46:13 2024 00:09:22.522 read: IOPS=2443, BW=9774KiB/s (10.0MB/s)(9784KiB/1001msec) 00:09:22.522 slat (nsec): min=7622, max=42041, avg=8619.96, stdev=1368.63 00:09:22.522 clat (usec): min=170, max=330, avg=217.25, stdev=25.00 00:09:22.522 lat (usec): min=178, max=357, avg=225.87, stdev=24.97 00:09:22.522 clat percentiles (usec): 00:09:22.522 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:09:22.522 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 217], 00:09:22.522 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 265], 00:09:22.522 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 310], 99.95th=[ 314], 
00:09:22.522 | 99.99th=[ 330] 00:09:22.522 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:22.522 slat (nsec): min=11277, max=41866, avg=12705.87, stdev=1746.98 00:09:22.522 clat (usec): min=116, max=296, avg=155.61, stdev=17.14 00:09:22.522 lat (usec): min=128, max=338, avg=168.32, stdev=17.36 00:09:22.522 clat percentiles (usec): 00:09:22.522 | 1.00th=[ 128], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:09:22.522 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 00:09:22.522 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 180], 95.00th=[ 192], 00:09:22.522 | 99.00th=[ 204], 99.50th=[ 206], 99.90th=[ 237], 99.95th=[ 247], 00:09:22.522 | 99.99th=[ 297] 00:09:22.522 bw ( KiB/s): min=12248, max=12248, per=43.49%, avg=12248.00, stdev= 0.00, samples=1 00:09:22.522 iops : min= 3062, max= 3062, avg=3062.00, stdev= 0.00, samples=1 00:09:22.522 lat (usec) : 250=93.69%, 500=6.31% 00:09:22.522 cpu : usr=4.00%, sys=8.40%, ctx=5007, majf=0, minf=2 00:09:22.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.522 issued rwts: total=2446,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.522 job2: (groupid=0, jobs=1): err= 0: pid=515057: Tue Dec 10 04:46:13 2024 00:09:22.522 read: IOPS=1127, BW=4511KiB/s (4619kB/s)(4592KiB/1018msec) 00:09:22.522 slat (nsec): min=7244, max=27724, avg=8316.44, stdev=1089.74 00:09:22.522 clat (usec): min=192, max=41990, avg=597.18, stdev=3802.91 00:09:22.522 lat (usec): min=201, max=42001, avg=605.49, stdev=3803.22 00:09:22.522 clat percentiles (usec): 00:09:22.522 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 227], 00:09:22.522 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:09:22.522 | 70.00th=[ 
249], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 269], 00:09:22.522 | 99.00th=[ 490], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:09:22.522 | 99.99th=[42206] 00:09:22.522 write: IOPS=1508, BW=6035KiB/s (6180kB/s)(6144KiB/1018msec); 0 zone resets 00:09:22.522 slat (nsec): min=10151, max=45298, avg=11937.75, stdev=1807.45 00:09:22.522 clat (usec): min=136, max=330, avg=192.56, stdev=23.99 00:09:22.522 lat (usec): min=148, max=342, avg=204.50, stdev=24.02 00:09:22.522 clat percentiles (usec): 00:09:22.522 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 172], 00:09:22.522 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 200], 00:09:22.522 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 231], 00:09:22.522 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 289], 99.95th=[ 330], 00:09:22.522 | 99.99th=[ 330] 00:09:22.522 bw ( KiB/s): min= 3160, max= 9128, per=21.81%, avg=6144.00, stdev=4220.01, samples=2 00:09:22.522 iops : min= 790, max= 2282, avg=1536.00, stdev=1055.00, samples=2 00:09:22.522 lat (usec) : 250=87.85%, 500=11.77% 00:09:22.522 lat (msec) : 50=0.37% 00:09:22.522 cpu : usr=2.26%, sys=4.23%, ctx=2684, majf=0, minf=2 00:09:22.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.522 issued rwts: total=1148,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.522 job3: (groupid=0, jobs=1): err= 0: pid=515060: Tue Dec 10 04:46:13 2024 00:09:22.522 read: IOPS=2195, BW=8783KiB/s (8994kB/s)(8792KiB/1001msec) 00:09:22.522 slat (nsec): min=7390, max=44627, avg=8522.63, stdev=1600.75 00:09:22.522 clat (usec): min=183, max=40678, avg=238.64, stdev=863.11 00:09:22.522 lat (usec): min=192, max=40687, avg=247.17, stdev=863.12 00:09:22.522 clat percentiles (usec): 
00:09:22.522 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:09:22.523 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 223], 00:09:22.523 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 241], 95.00th=[ 247], 00:09:22.523 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 330], 99.95th=[ 461], 00:09:22.523 | 99.99th=[40633] 00:09:22.523 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:22.523 slat (nsec): min=10932, max=39835, avg=12324.81, stdev=1716.34 00:09:22.523 clat (usec): min=121, max=256, avg=160.64, stdev=15.02 00:09:22.523 lat (usec): min=132, max=268, avg=172.96, stdev=15.36 00:09:22.523 clat percentiles (usec): 00:09:22.523 | 1.00th=[ 130], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:09:22.523 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:09:22.523 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 186], 00:09:22.523 | 99.00th=[ 200], 99.50th=[ 204], 99.90th=[ 229], 99.95th=[ 239], 00:09:22.523 | 99.99th=[ 258] 00:09:22.523 bw ( KiB/s): min= 9504, max= 9504, per=33.74%, avg=9504.00, stdev= 0.00, samples=1 00:09:22.523 iops : min= 2376, max= 2376, avg=2376.00, stdev= 0.00, samples=1 00:09:22.523 lat (usec) : 250=98.28%, 500=1.70% 00:09:22.523 lat (msec) : 50=0.02% 00:09:22.523 cpu : usr=3.10%, sys=8.70%, ctx=4759, majf=0, minf=1 00:09:22.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.523 issued rwts: total=2198,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.523 00:09:22.523 Run status group 0 (all jobs): 00:09:22.523 READ: bw=22.3MiB/s (23.4MB/s), 87.3KiB/s-9774KiB/s (89.4kB/s-10.0MB/s), io=22.7MiB (23.8MB), run=1001-1018msec 00:09:22.523 WRITE: bw=27.5MiB/s (28.8MB/s), 
2032KiB/s-9.99MiB/s (2081kB/s-10.5MB/s), io=28.0MiB (29.4MB), run=1001-1018msec 00:09:22.523 00:09:22.523 Disk stats (read/write): 00:09:22.523 nvme0n1: ios=41/512, merge=0/0, ticks=1601/92, in_queue=1693, util=85.57% 00:09:22.523 nvme0n2: ios=2074/2192, merge=0/0, ticks=1336/310, in_queue=1646, util=89.73% 00:09:22.523 nvme0n3: ios=1200/1536, merge=0/0, ticks=531/272, in_queue=803, util=94.27% 00:09:22.523 nvme0n4: ios=1951/2048, merge=0/0, ticks=1350/297, in_queue=1647, util=94.12% 00:09:22.523 04:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:22.523 [global] 00:09:22.523 thread=1 00:09:22.523 invalidate=1 00:09:22.523 rw=randwrite 00:09:22.523 time_based=1 00:09:22.523 runtime=1 00:09:22.523 ioengine=libaio 00:09:22.523 direct=1 00:09:22.523 bs=4096 00:09:22.523 iodepth=1 00:09:22.523 norandommap=0 00:09:22.523 numjobs=1 00:09:22.523 00:09:22.523 verify_dump=1 00:09:22.523 verify_backlog=512 00:09:22.523 verify_state_save=0 00:09:22.523 do_verify=1 00:09:22.523 verify=crc32c-intel 00:09:22.523 [job0] 00:09:22.523 filename=/dev/nvme0n1 00:09:22.523 [job1] 00:09:22.523 filename=/dev/nvme0n2 00:09:22.523 [job2] 00:09:22.523 filename=/dev/nvme0n3 00:09:22.523 [job3] 00:09:22.523 filename=/dev/nvme0n4 00:09:22.523 Could not set queue depth (nvme0n1) 00:09:22.523 Could not set queue depth (nvme0n2) 00:09:22.523 Could not set queue depth (nvme0n3) 00:09:22.523 Could not set queue depth (nvme0n4) 00:09:22.784 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.784 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.784 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.784 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.784 fio-3.35 00:09:22.784 Starting 4 threads 00:09:24.168 00:09:24.168 job0: (groupid=0, jobs=1): err= 0: pid=515494: Tue Dec 10 04:46:14 2024 00:09:24.168 read: IOPS=1535, BW=6143KiB/s (6291kB/s)(6180KiB/1006msec) 00:09:24.168 slat (nsec): min=4966, max=40878, avg=8228.32, stdev=2017.58 00:09:24.168 clat (usec): min=160, max=40983, avg=435.13, stdev=3098.96 00:09:24.168 lat (usec): min=168, max=41007, avg=443.36, stdev=3100.03 00:09:24.168 clat percentiles (usec): 00:09:24.168 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:09:24.168 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 202], 00:09:24.168 | 70.00th=[ 206], 80.00th=[ 210], 90.00th=[ 217], 95.00th=[ 223], 00:09:24.168 | 99.00th=[ 241], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:24.168 | 99.99th=[41157] 00:09:24.168 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:09:24.168 slat (nsec): min=7021, max=37743, avg=10681.57, stdev=2398.74 00:09:24.168 clat (usec): min=112, max=360, avg=141.06, stdev=18.92 00:09:24.168 lat (usec): min=122, max=367, avg=151.74, stdev=19.19 00:09:24.168 clat percentiles (usec): 00:09:24.168 | 1.00th=[ 118], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 127], 00:09:24.168 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141], 00:09:24.168 | 70.00th=[ 145], 80.00th=[ 153], 90.00th=[ 169], 95.00th=[ 180], 00:09:24.168 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 302], 99.95th=[ 322], 00:09:24.168 | 99.99th=[ 363] 00:09:24.168 bw ( KiB/s): min= 4096, max=12263, per=59.33%, avg=8179.50, stdev=5774.94, samples=2 00:09:24.168 iops : min= 1024, max= 3065, avg=2044.50, stdev=1443.20, samples=2 00:09:24.168 lat (usec) : 250=99.61%, 500=0.14% 00:09:24.168 lat (msec) : 50=0.25% 00:09:24.168 cpu : usr=2.19%, sys=5.97%, ctx=3594, majf=0, minf=1 00:09:24.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.168 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.168 issued rwts: total=1545,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.168 job1: (groupid=0, jobs=1): err= 0: pid=515516: Tue Dec 10 04:46:14 2024 00:09:24.168 read: IOPS=39, BW=160KiB/s (163kB/s)(160KiB/1003msec) 00:09:24.168 slat (nsec): min=7476, max=26220, avg=15911.27, stdev=7503.18 00:09:24.168 clat (usec): min=294, max=42080, avg=21834.41, stdev=20687.75 00:09:24.168 lat (usec): min=316, max=42087, avg=21850.32, stdev=20692.74 00:09:24.168 clat percentiles (usec): 00:09:24.168 | 1.00th=[ 293], 5.00th=[ 326], 10.00th=[ 347], 20.00th=[ 367], 00:09:24.168 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[40633], 60.00th=[41157], 00:09:24.168 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:24.168 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:24.168 | 99.99th=[42206] 00:09:24.168 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:09:24.168 slat (nsec): min=4398, max=41841, avg=11383.51, stdev=2560.58 00:09:24.168 clat (usec): min=125, max=640, avg=237.93, stdev=29.23 00:09:24.168 lat (usec): min=132, max=647, avg=249.32, stdev=29.84 00:09:24.168 clat percentiles (usec): 00:09:24.168 | 1.00th=[ 151], 5.00th=[ 202], 10.00th=[ 231], 20.00th=[ 237], 00:09:24.168 | 30.00th=[ 239], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 241], 00:09:24.168 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 247], 00:09:24.168 | 99.00th=[ 262], 99.50th=[ 383], 99.90th=[ 644], 99.95th=[ 644], 00:09:24.168 | 99.99th=[ 644] 00:09:24.168 bw ( KiB/s): min= 4087, max= 4087, per=29.65%, avg=4087.00, stdev= 0.00, samples=1 00:09:24.168 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:24.168 lat (usec) : 250=91.12%, 500=4.71%, 750=0.36% 00:09:24.168 lat (msec) : 
50=3.80% 00:09:24.168 cpu : usr=0.20%, sys=0.70%, ctx=552, majf=0, minf=2 00:09:24.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.168 issued rwts: total=40,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.168 job2: (groupid=0, jobs=1): err= 0: pid=515533: Tue Dec 10 04:46:14 2024 00:09:24.168 read: IOPS=42, BW=169KiB/s (173kB/s)(176KiB/1040msec) 00:09:24.168 slat (nsec): min=8093, max=29677, avg=16556.61, stdev=7916.59 00:09:24.168 clat (usec): min=230, max=42018, avg=20708.78, stdev=20584.84 00:09:24.168 lat (usec): min=239, max=42048, avg=20725.33, stdev=20590.96 00:09:24.168 clat percentiles (usec): 00:09:24.168 | 1.00th=[ 231], 5.00th=[ 363], 10.00th=[ 363], 20.00th=[ 367], 00:09:24.168 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[41157], 00:09:24.168 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:24.168 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:24.168 | 99.99th=[42206] 00:09:24.168 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:09:24.168 slat (nsec): min=10809, max=49687, avg=12283.87, stdev=2592.90 00:09:24.168 clat (usec): min=155, max=446, avg=234.35, stdev=23.82 00:09:24.168 lat (usec): min=166, max=459, avg=246.64, stdev=24.04 00:09:24.168 clat percentiles (usec): 00:09:24.168 | 1.00th=[ 165], 5.00th=[ 180], 10.00th=[ 196], 20.00th=[ 235], 00:09:24.168 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 239], 60.00th=[ 241], 00:09:24.168 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 247], 00:09:24.168 | 99.00th=[ 281], 99.50th=[ 355], 99.90th=[ 445], 99.95th=[ 445], 00:09:24.168 | 99.99th=[ 445] 00:09:24.168 bw ( KiB/s): min= 4087, max= 4087, per=29.65%, avg=4087.00, 
stdev= 0.00, samples=1 00:09:24.168 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:24.168 lat (usec) : 250=89.93%, 500=6.12% 00:09:24.168 lat (msec) : 50=3.96% 00:09:24.168 cpu : usr=0.19%, sys=1.15%, ctx=557, majf=0, minf=1 00:09:24.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.168 issued rwts: total=44,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.168 job3: (groupid=0, jobs=1): err= 0: pid=515534: Tue Dec 10 04:46:14 2024 00:09:24.168 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:09:24.168 slat (nsec): min=9047, max=23125, avg=10380.68, stdev=2957.99 00:09:24.168 clat (usec): min=40829, max=42024, avg=41088.93, stdev=313.20 00:09:24.168 lat (usec): min=40852, max=42033, avg=41099.31, stdev=312.77 00:09:24.168 clat percentiles (usec): 00:09:24.168 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:24.168 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:24.168 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:09:24.168 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:24.168 | 99.99th=[42206] 00:09:24.168 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:09:24.168 slat (nsec): min=6782, max=34467, avg=10805.00, stdev=2055.45 00:09:24.168 clat (usec): min=128, max=297, avg=177.72, stdev=30.14 00:09:24.168 lat (usec): min=138, max=332, avg=188.53, stdev=30.12 00:09:24.168 clat percentiles (usec): 00:09:24.168 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:09:24.168 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:09:24.168 | 70.00th=[ 184], 80.00th=[ 196], 90.00th=[ 241], 95.00th=[ 245], 
00:09:24.168 | 99.00th=[ 251], 99.50th=[ 273], 99.90th=[ 297], 99.95th=[ 297], 00:09:24.168 | 99.99th=[ 297] 00:09:24.168 bw ( KiB/s): min= 4087, max= 4087, per=29.65%, avg=4087.00, stdev= 0.00, samples=1 00:09:24.168 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:24.168 lat (usec) : 250=94.57%, 500=1.31% 00:09:24.168 lat (msec) : 50=4.12% 00:09:24.168 cpu : usr=0.40%, sys=0.30%, ctx=536, majf=0, minf=1 00:09:24.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.168 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.168 00:09:24.168 Run status group 0 (all jobs): 00:09:24.169 READ: bw=6350KiB/s (6502kB/s), 87.8KiB/s-6143KiB/s (89.9kB/s-6291kB/s), io=6604KiB (6762kB), run=1002-1040msec 00:09:24.169 WRITE: bw=13.5MiB/s (14.1MB/s), 1969KiB/s-8143KiB/s (2016kB/s-8339kB/s), io=14.0MiB (14.7MB), run=1002-1040msec 00:09:24.169 00:09:24.169 Disk stats (read/write): 00:09:24.169 nvme0n1: ios=1589/2048, merge=0/0, ticks=619/278, in_queue=897, util=89.58% 00:09:24.169 nvme0n2: ios=85/512, merge=0/0, ticks=936/119, in_queue=1055, util=90.66% 00:09:24.169 nvme0n3: ios=82/512, merge=0/0, ticks=829/115, in_queue=944, util=98.92% 00:09:24.169 nvme0n4: ios=74/512, merge=0/0, ticks=1016/90, in_queue=1106, util=98.13% 00:09:24.169 04:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:24.169 [global] 00:09:24.169 thread=1 00:09:24.169 invalidate=1 00:09:24.169 rw=write 00:09:24.169 time_based=1 00:09:24.169 runtime=1 00:09:24.169 ioengine=libaio 00:09:24.169 direct=1 00:09:24.169 bs=4096 00:09:24.169 iodepth=128 00:09:24.169 
norandommap=0 00:09:24.169 numjobs=1 00:09:24.169 00:09:24.169 verify_dump=1 00:09:24.169 verify_backlog=512 00:09:24.169 verify_state_save=0 00:09:24.169 do_verify=1 00:09:24.169 verify=crc32c-intel 00:09:24.169 [job0] 00:09:24.169 filename=/dev/nvme0n1 00:09:24.169 [job1] 00:09:24.169 filename=/dev/nvme0n2 00:09:24.169 [job2] 00:09:24.169 filename=/dev/nvme0n3 00:09:24.169 [job3] 00:09:24.169 filename=/dev/nvme0n4 00:09:24.169 Could not set queue depth (nvme0n1) 00:09:24.169 Could not set queue depth (nvme0n2) 00:09:24.169 Could not set queue depth (nvme0n3) 00:09:24.169 Could not set queue depth (nvme0n4) 00:09:24.425 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:24.425 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:24.425 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:24.425 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:24.425 fio-3.35 00:09:24.425 Starting 4 threads 00:09:25.793 00:09:25.793 job0: (groupid=0, jobs=1): err= 0: pid=515894: Tue Dec 10 04:46:16 2024 00:09:25.793 read: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec) 00:09:25.793 slat (nsec): min=1332, max=10222k, avg=90914.33, stdev=642739.79 00:09:25.793 clat (usec): min=2086, max=22937, avg=11339.24, stdev=3202.46 00:09:25.793 lat (usec): min=2096, max=27369, avg=11430.15, stdev=3239.97 00:09:25.793 clat percentiles (usec): 00:09:25.793 | 1.00th=[ 4424], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[ 9503], 00:09:25.793 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:09:25.793 | 70.00th=[11600], 80.00th=[13829], 90.00th=[16319], 95.00th=[17695], 00:09:25.793 | 99.00th=[22152], 99.50th=[22414], 99.90th=[22938], 99.95th=[22938], 00:09:25.793 | 99.99th=[22938] 00:09:25.793 write: IOPS=6233, BW=24.3MiB/s 
(25.5MB/s)(24.5MiB/1006msec); 0 zone resets 00:09:25.793 slat (usec): min=2, max=7768, avg=63.27, stdev=249.59 00:09:25.793 clat (usec): min=1475, max=21320, avg=9243.96, stdev=2223.99 00:09:25.793 lat (usec): min=1490, max=21334, avg=9307.23, stdev=2244.68 00:09:25.793 clat percentiles (usec): 00:09:25.793 | 1.00th=[ 2409], 5.00th=[ 4359], 10.00th=[ 5800], 20.00th=[ 7832], 00:09:25.793 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10290], 00:09:25.793 | 70.00th=[10290], 80.00th=[10552], 90.00th=[11338], 95.00th=[11600], 00:09:25.793 | 99.00th=[11994], 99.50th=[13960], 99.90th=[20841], 99.95th=[21365], 00:09:25.793 | 99.99th=[21365] 00:09:25.793 bw ( KiB/s): min=24576, max=25088, per=33.79%, avg=24832.00, stdev=362.04, samples=2 00:09:25.793 iops : min= 6144, max= 6272, avg=6208.00, stdev=90.51, samples=2 00:09:25.793 lat (msec) : 2=0.42%, 4=1.56%, 10=40.17%, 20=56.89%, 50=0.96% 00:09:25.793 cpu : usr=4.78%, sys=6.47%, ctx=806, majf=0, minf=1 00:09:25.793 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:25.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.793 issued rwts: total=6144,6271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.793 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.793 job1: (groupid=0, jobs=1): err= 0: pid=515897: Tue Dec 10 04:46:16 2024 00:09:25.793 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:09:25.793 slat (nsec): min=1393, max=17153k, avg=108456.68, stdev=684534.12 00:09:25.793 clat (usec): min=433, max=53999, avg=13613.67, stdev=8723.52 00:09:25.793 lat (usec): min=3246, max=54007, avg=13722.13, stdev=8766.96 00:09:25.793 clat percentiles (usec): 00:09:25.793 | 1.00th=[ 6849], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10421], 00:09:25.793 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:09:25.793 | 70.00th=[12256], 
80.00th=[13173], 90.00th=[14091], 95.00th=[35914], 00:09:25.793 | 99.00th=[50594], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:09:25.793 | 99.99th=[53740] 00:09:25.794 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:09:25.794 slat (usec): min=2, max=3300, avg=81.71, stdev=365.80 00:09:25.794 clat (usec): min=7291, max=23162, avg=11115.26, stdev=1653.27 00:09:25.794 lat (usec): min=7408, max=23166, avg=11196.97, stdev=1628.90 00:09:25.794 clat percentiles (usec): 00:09:25.794 | 1.00th=[ 8160], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10028], 00:09:25.794 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:09:25.794 | 70.00th=[11338], 80.00th=[12518], 90.00th=[13435], 95.00th=[13566], 00:09:25.794 | 99.00th=[15533], 99.50th=[21103], 99.90th=[23200], 99.95th=[23200], 00:09:25.794 | 99.99th=[23200] 00:09:25.794 bw ( KiB/s): min=18944, max=22016, per=27.87%, avg=20480.00, stdev=2172.23, samples=2 00:09:25.794 iops : min= 4736, max= 5504, avg=5120.00, stdev=543.06, samples=2 00:09:25.794 lat (usec) : 500=0.01% 00:09:25.794 lat (msec) : 4=0.31%, 10=14.20%, 20=81.44%, 50=3.13%, 100=0.91% 00:09:25.794 cpu : usr=3.79%, sys=5.38%, ctx=590, majf=0, minf=1 00:09:25.794 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:25.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.794 issued rwts: total=5120,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.794 job2: (groupid=0, jobs=1): err= 0: pid=515898: Tue Dec 10 04:46:16 2024 00:09:25.794 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:09:25.794 slat (nsec): min=1938, max=18645k, avg=168825.47, stdev=947630.11 00:09:25.794 clat (usec): min=5373, max=61844, avg=21609.84, stdev=14018.65 00:09:25.794 lat (usec): min=5376, max=61854, 
avg=21778.66, stdev=14119.26 00:09:25.794 clat percentiles (usec): 00:09:25.794 | 1.00th=[ 5407], 5.00th=[ 9503], 10.00th=[11207], 20.00th=[11994], 00:09:25.794 | 30.00th=[13304], 40.00th=[13698], 50.00th=[14615], 60.00th=[16057], 00:09:25.794 | 70.00th=[21365], 80.00th=[33162], 90.00th=[46400], 95.00th=[52167], 00:09:25.794 | 99.00th=[59507], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:09:25.794 | 99.99th=[61604] 00:09:25.794 write: IOPS=3077, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1007msec); 0 zone resets 00:09:25.794 slat (usec): min=3, max=21639, avg=143.73, stdev=704.97 00:09:25.794 clat (usec): min=3798, max=56924, avg=18649.50, stdev=8905.73 00:09:25.794 lat (usec): min=8890, max=56935, avg=18793.22, stdev=8955.16 00:09:25.794 clat percentiles (usec): 00:09:25.794 | 1.00th=[ 9503], 5.00th=[11338], 10.00th=[11469], 20.00th=[12911], 00:09:25.794 | 30.00th=[13173], 40.00th=[13435], 50.00th=[14746], 60.00th=[16909], 00:09:25.794 | 70.00th=[21890], 80.00th=[23200], 90.00th=[30802], 95.00th=[40633], 00:09:25.794 | 99.00th=[50594], 99.50th=[52167], 99.90th=[56886], 99.95th=[56886], 00:09:25.794 | 99.99th=[56886] 00:09:25.794 bw ( KiB/s): min=12288, max=12288, per=16.72%, avg=12288.00, stdev= 0.00, samples=2 00:09:25.794 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:25.794 lat (msec) : 4=0.02%, 10=3.84%, 20=62.70%, 50=28.59%, 100=4.86% 00:09:25.794 cpu : usr=2.98%, sys=4.67%, ctx=316, majf=0, minf=1 00:09:25.794 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:25.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.794 issued rwts: total=3072,3099,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.794 job3: (groupid=0, jobs=1): err= 0: pid=515899: Tue Dec 10 04:46:16 2024 00:09:25.794 read: IOPS=3559, BW=13.9MiB/s 
(14.6MB/s)(14.0MiB/1007msec) 00:09:25.794 slat (nsec): min=1612, max=9178.2k, avg=120036.71, stdev=693437.32 00:09:25.794 clat (usec): min=8084, max=25325, avg=15072.32, stdev=2379.07 00:09:25.794 lat (usec): min=8092, max=25351, avg=15192.36, stdev=2450.32 00:09:25.794 clat percentiles (usec): 00:09:25.794 | 1.00th=[ 9372], 5.00th=[11207], 10.00th=[12911], 20.00th=[13304], 00:09:25.794 | 30.00th=[13698], 40.00th=[14222], 50.00th=[15008], 60.00th=[15401], 00:09:25.794 | 70.00th=[15926], 80.00th=[16909], 90.00th=[17957], 95.00th=[18744], 00:09:25.794 | 99.00th=[22676], 99.50th=[23462], 99.90th=[23987], 99.95th=[25035], 00:09:25.794 | 99.99th=[25297] 00:09:25.794 write: IOPS=3983, BW=15.6MiB/s (16.3MB/s)(15.7MiB/1007msec); 0 zone resets 00:09:25.794 slat (usec): min=2, max=21616, avg=135.00, stdev=670.79 00:09:25.794 clat (usec): min=6770, max=33929, avg=17589.42, stdev=6219.76 00:09:25.794 lat (usec): min=6782, max=33937, avg=17724.42, stdev=6268.90 00:09:25.794 clat percentiles (usec): 00:09:25.794 | 1.00th=[ 7308], 5.00th=[ 9503], 10.00th=[12387], 20.00th=[13173], 00:09:25.794 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14353], 60.00th=[17433], 00:09:25.794 | 70.00th=[22152], 80.00th=[23200], 90.00th=[26084], 95.00th=[30278], 00:09:25.794 | 99.00th=[33424], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:09:25.794 | 99.99th=[33817] 00:09:25.794 bw ( KiB/s): min=14688, max=16384, per=21.14%, avg=15536.00, stdev=1199.25, samples=2 00:09:25.794 iops : min= 3672, max= 4096, avg=3884.00, stdev=299.81, samples=2 00:09:25.794 lat (msec) : 10=4.16%, 20=76.56%, 50=19.28% 00:09:25.794 cpu : usr=2.98%, sys=6.06%, ctx=446, majf=0, minf=1 00:09:25.794 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:25.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.794 issued rwts: total=3584,4011,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:09:25.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.794 00:09:25.794 Run status group 0 (all jobs): 00:09:25.794 READ: bw=69.5MiB/s (72.9MB/s), 11.9MiB/s-23.9MiB/s (12.5MB/s-25.0MB/s), io=70.0MiB (73.4MB), run=1004-1007msec 00:09:25.794 WRITE: bw=71.8MiB/s (75.3MB/s), 12.0MiB/s-24.3MiB/s (12.6MB/s-25.5MB/s), io=72.3MiB (75.8MB), run=1004-1007msec 00:09:25.794 00:09:25.794 Disk stats (read/write): 00:09:25.794 nvme0n1: ios=4783/5120, merge=0/0, ticks=52208/46833, in_queue=99041, util=82.15% 00:09:25.794 nvme0n2: ios=4528/4608, merge=0/0, ticks=13544/11937, in_queue=25481, util=89.21% 00:09:25.794 nvme0n3: ios=2070/2487, merge=0/0, ticks=15104/12091, in_queue=27195, util=95.23% 00:09:25.794 nvme0n4: ios=2799/3072, merge=0/0, ticks=20772/27577, in_queue=48349, util=99.56% 00:09:25.794 04:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:25.794 [global] 00:09:25.794 thread=1 00:09:25.794 invalidate=1 00:09:25.794 rw=randwrite 00:09:25.794 time_based=1 00:09:25.794 runtime=1 00:09:25.794 ioengine=libaio 00:09:25.794 direct=1 00:09:25.794 bs=4096 00:09:25.794 iodepth=128 00:09:25.794 norandommap=0 00:09:25.794 numjobs=1 00:09:25.794 00:09:25.794 verify_dump=1 00:09:25.794 verify_backlog=512 00:09:25.794 verify_state_save=0 00:09:25.794 do_verify=1 00:09:25.794 verify=crc32c-intel 00:09:25.794 [job0] 00:09:25.794 filename=/dev/nvme0n1 00:09:25.794 [job1] 00:09:25.794 filename=/dev/nvme0n2 00:09:25.794 [job2] 00:09:25.794 filename=/dev/nvme0n3 00:09:25.794 [job3] 00:09:25.794 filename=/dev/nvme0n4 00:09:25.794 Could not set queue depth (nvme0n1) 00:09:25.794 Could not set queue depth (nvme0n2) 00:09:25.794 Could not set queue depth (nvme0n3) 00:09:25.794 Could not set queue depth (nvme0n4) 00:09:26.052 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:09:26.052 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.052 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.052 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.052 fio-3.35 00:09:26.052 Starting 4 threads 00:09:27.426 00:09:27.426 job0: (groupid=0, jobs=1): err= 0: pid=516268: Tue Dec 10 04:46:18 2024 00:09:27.426 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:09:27.426 slat (nsec): min=1604, max=13556k, avg=137151.25, stdev=796338.60 00:09:27.426 clat (usec): min=8135, max=97811, avg=18959.33, stdev=16728.78 00:09:27.426 lat (usec): min=8142, max=97866, avg=19096.48, stdev=16833.10 00:09:27.426 clat percentiles (usec): 00:09:27.426 | 1.00th=[ 9110], 5.00th=[10552], 10.00th=[10814], 20.00th=[11338], 00:09:27.426 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13042], 60.00th=[15270], 00:09:27.426 | 70.00th=[17695], 80.00th=[18220], 90.00th=[23725], 95.00th=[73925], 00:09:27.426 | 99.00th=[87557], 99.50th=[92799], 99.90th=[95945], 99.95th=[95945], 00:09:27.426 | 99.99th=[98042] 00:09:27.426 write: IOPS=2848, BW=11.1MiB/s (11.7MB/s)(11.2MiB/1006msec); 0 zone resets 00:09:27.426 slat (usec): min=2, max=25552, avg=219.30, stdev=1407.29 00:09:27.426 clat (msec): min=5, max=113, avg=26.06, stdev=23.80 00:09:27.426 lat (msec): min=6, max=113, avg=26.28, stdev=23.96 00:09:27.426 clat percentiles (msec): 00:09:27.426 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:09:27.426 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 16], 60.00th=[ 20], 00:09:27.426 | 70.00th=[ 23], 80.00th=[ 40], 90.00th=[ 62], 95.00th=[ 78], 00:09:27.426 | 99.00th=[ 107], 99.50th=[ 113], 99.90th=[ 113], 99.95th=[ 113], 00:09:27.426 | 99.99th=[ 113] 00:09:27.426 bw ( KiB/s): min= 6096, max=15816, per=15.34%, avg=10956.00, stdev=6873.08, samples=2 
00:09:27.426 iops : min= 1524, max= 3954, avg=2739.00, stdev=1718.27, samples=2 00:09:27.426 lat (msec) : 10=5.68%, 20=67.84%, 50=15.08%, 100=9.66%, 250=1.75% 00:09:27.426 cpu : usr=2.89%, sys=3.78%, ctx=281, majf=0, minf=1 00:09:27.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:27.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.426 issued rwts: total=2560,2866,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.426 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.426 job1: (groupid=0, jobs=1): err= 0: pid=516269: Tue Dec 10 04:46:18 2024 00:09:27.426 read: IOPS=5399, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1005msec) 00:09:27.426 slat (nsec): min=1296, max=10563k, avg=87626.69, stdev=620482.94 00:09:27.426 clat (usec): min=279, max=39375, avg=11063.53, stdev=4686.65 00:09:27.426 lat (usec): min=287, max=39381, avg=11151.16, stdev=4724.96 00:09:27.426 clat percentiles (usec): 00:09:27.426 | 1.00th=[ 791], 5.00th=[ 6783], 10.00th=[ 7570], 20.00th=[ 8586], 00:09:27.426 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10552], 00:09:27.426 | 70.00th=[11076], 80.00th=[12649], 90.00th=[15795], 95.00th=[17957], 00:09:27.426 | 99.00th=[30540], 99.50th=[37487], 99.90th=[38536], 99.95th=[39584], 00:09:27.426 | 99.99th=[39584] 00:09:27.426 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:09:27.426 slat (usec): min=2, max=31818, avg=84.85, stdev=633.90 00:09:27.426 clat (usec): min=1682, max=90468, avg=11950.23, stdev=11525.64 00:09:27.426 lat (usec): min=1696, max=90482, avg=12035.08, stdev=11587.53 00:09:27.426 clat percentiles (usec): 00:09:27.426 | 1.00th=[ 3392], 5.00th=[ 5014], 10.00th=[ 6456], 20.00th=[ 8094], 00:09:27.426 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10028], 00:09:27.426 | 70.00th=[10159], 80.00th=[10552], 90.00th=[12125], 
95.00th=[33817], 00:09:27.426 | 99.00th=[81265], 99.50th=[89654], 99.90th=[90702], 99.95th=[90702], 00:09:27.426 | 99.99th=[90702] 00:09:27.426 bw ( KiB/s): min=20480, max=24576, per=31.55%, avg=22528.00, stdev=2896.31, samples=2 00:09:27.426 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:09:27.426 lat (usec) : 500=0.02%, 750=0.33%, 1000=0.42% 00:09:27.426 lat (msec) : 2=0.08%, 4=1.27%, 10=48.16%, 20=44.28%, 50=4.23% 00:09:27.426 lat (msec) : 100=1.21% 00:09:27.426 cpu : usr=4.68%, sys=6.37%, ctx=668, majf=0, minf=1 00:09:27.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:27.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.426 issued rwts: total=5426,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.426 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.426 job2: (groupid=0, jobs=1): err= 0: pid=516270: Tue Dec 10 04:46:18 2024 00:09:27.426 read: IOPS=5015, BW=19.6MiB/s (20.5MB/s)(19.7MiB/1004msec) 00:09:27.426 slat (nsec): min=1400, max=18710k, avg=94813.46, stdev=628673.43 00:09:27.426 clat (usec): min=461, max=31541, avg=12804.04, stdev=3399.98 00:09:27.426 lat (usec): min=3144, max=31546, avg=12898.85, stdev=3419.10 00:09:27.426 clat percentiles (usec): 00:09:27.426 | 1.00th=[ 5473], 5.00th=[ 8455], 10.00th=[10159], 20.00th=[11600], 00:09:27.426 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:09:27.426 | 70.00th=[13304], 80.00th=[14091], 90.00th=[14746], 95.00th=[17433], 00:09:27.426 | 99.00th=[30278], 99.50th=[30278], 99.90th=[31589], 99.95th=[31589], 00:09:27.426 | 99.99th=[31589] 00:09:27.426 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:09:27.426 slat (usec): min=2, max=11710, avg=82.57, stdev=552.78 00:09:27.426 clat (usec): min=690, max=31673, avg=12288.78, stdev=4460.26 00:09:27.426 lat (usec): min=719, 
max=31687, avg=12371.35, stdev=4494.04 00:09:27.426 clat percentiles (usec): 00:09:27.426 | 1.00th=[ 2507], 5.00th=[ 6456], 10.00th=[ 8291], 20.00th=[ 9765], 00:09:27.426 | 30.00th=[11076], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:09:27.426 | 70.00th=[12518], 80.00th=[13304], 90.00th=[15008], 95.00th=[22414], 00:09:27.426 | 99.00th=[30802], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:09:27.426 | 99.99th=[31589] 00:09:27.426 bw ( KiB/s): min=20480, max=20480, per=28.68%, avg=20480.00, stdev= 0.00, samples=2 00:09:27.426 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:09:27.426 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.05% 00:09:27.426 lat (msec) : 2=0.24%, 4=1.04%, 10=13.79%, 20=80.06%, 50=4.78% 00:09:27.426 cpu : usr=3.09%, sys=5.98%, ctx=489, majf=0, minf=1 00:09:27.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:27.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.426 issued rwts: total=5036,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.426 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.426 job3: (groupid=0, jobs=1): err= 0: pid=516271: Tue Dec 10 04:46:18 2024 00:09:27.426 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:09:27.426 slat (nsec): min=1678, max=23350k, avg=121719.85, stdev=950146.84 00:09:27.426 clat (usec): min=5139, max=57556, avg=14900.82, stdev=5889.34 00:09:27.426 lat (usec): min=5146, max=57572, avg=15022.54, stdev=5969.14 00:09:27.427 clat percentiles (usec): 00:09:27.427 | 1.00th=[ 6718], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[11994], 00:09:27.427 | 30.00th=[12256], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:09:27.427 | 70.00th=[14222], 80.00th=[17433], 90.00th=[22938], 95.00th=[30016], 00:09:27.427 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:09:27.427 | 
99.99th=[57410] 00:09:27.427 write: IOPS=4370, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1010msec); 0 zone resets 00:09:27.427 slat (usec): min=2, max=9857, avg=106.83, stdev=567.16 00:09:27.427 clat (usec): min=1950, max=36091, avg=15148.79, stdev=6354.80 00:09:27.427 lat (usec): min=1962, max=36104, avg=15255.62, stdev=6409.54 00:09:27.427 clat percentiles (usec): 00:09:27.427 | 1.00th=[ 4113], 5.00th=[ 7308], 10.00th=[ 9241], 20.00th=[10814], 00:09:27.427 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12387], 60.00th=[13173], 00:09:27.427 | 70.00th=[17695], 80.00th=[23200], 90.00th=[25297], 95.00th=[26608], 00:09:27.427 | 99.00th=[28967], 99.50th=[29230], 99.90th=[30016], 99.95th=[33817], 00:09:27.427 | 99.99th=[35914] 00:09:27.427 bw ( KiB/s): min=16920, max=17376, per=24.01%, avg=17148.00, stdev=322.44, samples=2 00:09:27.427 iops : min= 4230, max= 4344, avg=4287.00, stdev=80.61, samples=2 00:09:27.427 lat (msec) : 2=0.02%, 4=0.33%, 10=11.96%, 20=66.76%, 50=20.92% 00:09:27.427 lat (msec) : 100=0.01% 00:09:27.427 cpu : usr=3.77%, sys=5.85%, ctx=431, majf=0, minf=1 00:09:27.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:27.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.427 issued rwts: total=4096,4414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.427 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.427 00:09:27.427 Run status group 0 (all jobs): 00:09:27.427 READ: bw=66.2MiB/s (69.4MB/s), 9.94MiB/s-21.1MiB/s (10.4MB/s-22.1MB/s), io=66.9MiB (70.1MB), run=1004-1010msec 00:09:27.427 WRITE: bw=69.7MiB/s (73.1MB/s), 11.1MiB/s-21.9MiB/s (11.7MB/s-23.0MB/s), io=70.4MiB (73.9MB), run=1004-1010msec 00:09:27.427 00:09:27.427 Disk stats (read/write): 00:09:27.427 nvme0n1: ios=2317/2560, merge=0/0, ticks=9357/19867, in_queue=29224, util=97.09% 00:09:27.427 nvme0n2: ios=4211/4608, merge=0/0, ticks=46309/53172, 
in_queue=99481, util=91.47% 00:09:27.427 nvme0n3: ios=3991/4096, merge=0/0, ticks=32695/35777, in_queue=68472, util=88.62% 00:09:27.427 nvme0n4: ios=3131/3503, merge=0/0, ticks=46243/52822, in_queue=99065, util=97.57% 00:09:27.427 04:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:27.427 04:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=516491 00:09:27.427 04:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:27.427 04:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:27.427 [global] 00:09:27.427 thread=1 00:09:27.427 invalidate=1 00:09:27.427 rw=read 00:09:27.427 time_based=1 00:09:27.427 runtime=10 00:09:27.427 ioengine=libaio 00:09:27.427 direct=1 00:09:27.427 bs=4096 00:09:27.427 iodepth=1 00:09:27.427 norandommap=1 00:09:27.427 numjobs=1 00:09:27.427 00:09:27.427 [job0] 00:09:27.427 filename=/dev/nvme0n1 00:09:27.427 [job1] 00:09:27.427 filename=/dev/nvme0n2 00:09:27.427 [job2] 00:09:27.427 filename=/dev/nvme0n3 00:09:27.427 [job3] 00:09:27.427 filename=/dev/nvme0n4 00:09:27.427 Could not set queue depth (nvme0n1) 00:09:27.427 Could not set queue depth (nvme0n2) 00:09:27.427 Could not set queue depth (nvme0n3) 00:09:27.427 Could not set queue depth (nvme0n4) 00:09:27.684 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.684 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.684 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.684 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.684 fio-3.35 00:09:27.684 Starting 4 threads 00:09:30.211 04:46:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:30.469 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=278528, buflen=4096 00:09:30.469 fio: pid=516639, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:30.469 04:46:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:30.728 04:46:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:30.728 04:46:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:30.728 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=311296, buflen=4096 00:09:30.728 fio: pid=516635, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:30.728 04:46:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:30.728 04:46:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:30.728 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=30478336, buflen=4096 00:09:30.728 fio: pid=516631, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:30.986 04:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:30.986 04:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 
00:09:30.986 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=3444736, buflen=4096 00:09:30.986 fio: pid=516634, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:30.986 00:09:30.986 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=516631: Tue Dec 10 04:46:22 2024 00:09:30.986 read: IOPS=2424, BW=9695KiB/s (9928kB/s)(29.1MiB/3070msec) 00:09:30.986 slat (usec): min=7, max=17742, avg=11.70, stdev=229.36 00:09:30.986 clat (usec): min=153, max=42051, avg=396.04, stdev=2867.14 00:09:30.986 lat (usec): min=162, max=58999, avg=407.75, stdev=2927.26 00:09:30.986 clat percentiles (usec): 00:09:30.986 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:09:30.986 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:09:30.986 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 219], 00:09:30.986 | 99.00th=[ 265], 99.50th=[ 4113], 99.90th=[41157], 99.95th=[41157], 00:09:30.986 | 99.99th=[42206] 00:09:30.986 bw ( KiB/s): min= 108, max=19944, per=96.96%, avg=9912.67, stdev=10726.68, samples=6 00:09:30.986 iops : min= 27, max= 4984, avg=2477.83, stdev=2681.30, samples=6 00:09:30.986 lat (usec) : 250=98.79%, 500=0.66%, 750=0.03% 00:09:30.986 lat (msec) : 10=0.01%, 50=0.50% 00:09:30.986 cpu : usr=1.24%, sys=3.88%, ctx=7445, majf=0, minf=1 00:09:30.986 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.986 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.986 issued rwts: total=7442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.986 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.986 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=516634: Tue Dec 10 04:46:22 2024 00:09:30.986 read: IOPS=255, BW=1020KiB/s 
(1045kB/s)(3364KiB/3297msec) 00:09:30.986 slat (usec): min=3, max=15578, avg=53.10, stdev=758.17 00:09:30.986 clat (usec): min=173, max=42466, avg=3855.32, stdev=11589.49 00:09:30.986 lat (usec): min=180, max=42473, avg=3908.46, stdev=11603.53 00:09:30.986 clat percentiles (usec): 00:09:30.986 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:09:30.986 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 231], 00:09:30.986 | 70.00th=[ 247], 80.00th=[ 379], 90.00th=[ 408], 95.00th=[41157], 00:09:30.986 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:30.986 | 99.99th=[42206] 00:09:30.986 bw ( KiB/s): min= 104, max= 4462, per=8.63%, avg=882.33, stdev=1754.61, samples=6 00:09:30.986 iops : min= 26, max= 1115, avg=220.50, stdev=438.45, samples=6 00:09:30.986 lat (usec) : 250=72.33%, 500=18.29%, 750=0.36% 00:09:30.986 lat (msec) : 20=0.12%, 50=8.79% 00:09:30.986 cpu : usr=0.15%, sys=0.18%, ctx=849, majf=0, minf=2 00:09:30.986 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.986 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.986 issued rwts: total=842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.986 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.986 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=516635: Tue Dec 10 04:46:22 2024 00:09:30.986 read: IOPS=26, BW=106KiB/s (109kB/s)(304KiB/2861msec) 00:09:30.986 slat (nsec): min=8399, max=68258, avg=19243.88, stdev=8251.94 00:09:30.986 clat (usec): min=336, max=42231, avg=37257.18, stdev=11813.73 00:09:30.986 lat (usec): min=358, max=42254, avg=37276.35, stdev=11811.54 00:09:30.986 clat percentiles (usec): 00:09:30.986 | 1.00th=[ 338], 5.00th=[ 392], 10.00th=[40633], 20.00th=[40633], 00:09:30.986 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 
60.00th=[41157], 00:09:30.986 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:30.986 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:30.986 | 99.99th=[42206] 00:09:30.986 bw ( KiB/s): min= 96, max= 143, per=1.05%, avg=107.00, stdev=20.42, samples=5 00:09:30.986 iops : min= 24, max= 35, avg=26.60, stdev= 4.77, samples=5 00:09:30.986 lat (usec) : 500=7.79%, 750=1.30% 00:09:30.986 lat (msec) : 50=89.61% 00:09:30.986 cpu : usr=0.10%, sys=0.00%, ctx=78, majf=0, minf=2 00:09:30.986 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.986 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.986 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.986 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.986 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=516639: Tue Dec 10 04:46:22 2024 00:09:30.986 read: IOPS=25, BW=102KiB/s (105kB/s)(272KiB/2663msec) 00:09:30.986 slat (nsec): min=8688, max=45416, avg=22655.35, stdev=4218.87 00:09:30.987 clat (usec): min=289, max=42078, avg=38814.45, stdev=9686.35 00:09:30.987 lat (usec): min=312, max=42102, avg=38837.12, stdev=9684.47 00:09:30.987 clat percentiles (usec): 00:09:30.987 | 1.00th=[ 289], 5.00th=[ 490], 10.00th=[40633], 20.00th=[41157], 00:09:30.987 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:30.987 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:30.987 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:30.987 | 99.99th=[42206] 00:09:30.987 bw ( KiB/s): min= 96, max= 112, per=1.00%, avg=102.40, stdev= 8.76, samples=5 00:09:30.987 iops : min= 24, max= 28, avg=25.60, stdev= 2.19, samples=5 00:09:30.987 lat (usec) : 500=5.80% 00:09:30.987 lat (msec) : 50=92.75% 
00:09:30.987 cpu : usr=0.00%, sys=0.11%, ctx=70, majf=0, minf=2 00:09:30.987 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.987 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.987 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.987 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.987 00:09:30.987 Run status group 0 (all jobs): 00:09:30.987 READ: bw=9.98MiB/s (10.5MB/s), 102KiB/s-9695KiB/s (105kB/s-9928kB/s), io=32.9MiB (34.5MB), run=2663-3297msec 00:09:30.987 00:09:30.987 Disk stats (read/write): 00:09:30.987 nvme0n1: ios=7483/0, merge=0/0, ticks=3960/0, in_queue=3960, util=98.27% 00:09:30.987 nvme0n2: ios=722/0, merge=0/0, ticks=3011/0, in_queue=3011, util=93.91% 00:09:30.987 nvme0n3: ios=75/0, merge=0/0, ticks=2792/0, in_queue=2792, util=96.17% 00:09:30.987 nvme0n4: ios=65/0, merge=0/0, ticks=2518/0, in_queue=2518, util=96.35% 00:09:31.244 04:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:31.244 04:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:31.502 04:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:31.502 04:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:31.759 04:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:31.759 04:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:31.759 04:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:31.759 04:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:32.017 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:32.017 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 516491 00:09:32.017 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:32.017 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:32.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.275 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:32.275 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:32.275 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:32.275 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.275 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:32.275 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.275 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:32.275 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:32.275 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:32.275 nvmf hotplug test: fio failed as expected 00:09:32.275 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.533 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:32.533 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:32.533 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:32.533 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:32.533 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:32.533 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:32.533 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:32.533 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:32.533 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:32.533 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:32.533 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:32.533 rmmod nvme_tcp 00:09:32.534 rmmod nvme_fabrics 00:09:32.534 rmmod nvme_keyring 00:09:32.534 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:32.534 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:32.534 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:32.534 04:46:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 513625 ']' 00:09:32.534 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 513625 00:09:32.534 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 513625 ']' 00:09:32.534 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 513625 00:09:32.534 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:32.534 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.534 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 513625 00:09:32.534 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:32.534 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:32.534 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 513625' 00:09:32.534 killing process with pid 513625 00:09:32.534 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 513625 00:09:32.534 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 513625 00:09:32.793 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:32.793 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:32.793 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:32.793 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:32.793 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 
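The `iptr` cleanup above pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, i.e. it strips every firewall rule the test suite tagged with an `SPDK_NVMF` comment and restores the rest. A minimal sketch of that filter step, run against a sample ruleset text instead of a live `iptables-save` (so it needs no root; the sample rules are illustrative, not from this run):

```shell
# The log's cleanup pattern is effectively:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
# Here we apply the filter step to a canned ruleset to show what it removes.
rules='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'

# Drop every rule carrying the SPDK_NVMF comment marker; unrelated rules survive.
filtered=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)
echo "$filtered"
```

Only the test-added rule (the one tagged `SPDK_NVMF`) is filtered out; pre-existing rules pass through unchanged, which is why the suite tags its rules with a comment in the first place.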
00:09:32.793 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:32.793 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:32.793 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:32.793 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:32.793 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.793 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.793 04:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.701 04:46:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:34.701 00:09:34.701 real 0m26.939s 00:09:34.701 user 1m48.122s 00:09:34.701 sys 0m8.524s 00:09:34.701 04:46:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.701 04:46:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.701 ************************************ 00:09:34.701 END TEST nvmf_fio_target 00:09:34.701 ************************************ 00:09:34.701 04:46:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:34.701 04:46:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:34.701 04:46:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.701 04:46:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.960 ************************************ 00:09:34.960 START 
TEST nvmf_bdevio 00:09:34.960 ************************************ 00:09:34.960 04:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:34.960 * Looking for test storage... 00:09:34.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.960 04:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:34.960 04:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:34.960 04:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:34.960 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:34.960 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.960 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.960 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.960 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.960 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.960 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.960 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.960 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.960 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.960 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.960 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:09:34.960 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:34.960 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:34.960 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.960 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.961 04:46:26 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:34.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.961 --rc genhtml_branch_coverage=1 00:09:34.961 --rc genhtml_function_coverage=1 00:09:34.961 --rc genhtml_legend=1 00:09:34.961 --rc geninfo_all_blocks=1 00:09:34.961 --rc geninfo_unexecuted_blocks=1 00:09:34.961 00:09:34.961 ' 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:34.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.961 --rc genhtml_branch_coverage=1 00:09:34.961 --rc genhtml_function_coverage=1 00:09:34.961 --rc genhtml_legend=1 00:09:34.961 --rc geninfo_all_blocks=1 00:09:34.961 --rc geninfo_unexecuted_blocks=1 00:09:34.961 00:09:34.961 ' 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:34.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.961 --rc genhtml_branch_coverage=1 00:09:34.961 --rc genhtml_function_coverage=1 00:09:34.961 --rc genhtml_legend=1 00:09:34.961 --rc geninfo_all_blocks=1 00:09:34.961 --rc geninfo_unexecuted_blocks=1 00:09:34.961 00:09:34.961 ' 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:34.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.961 --rc genhtml_branch_coverage=1 00:09:34.961 --rc genhtml_function_coverage=1 00:09:34.961 --rc genhtml_legend=1 00:09:34.961 --rc geninfo_all_blocks=1 00:09:34.961 --rc geninfo_unexecuted_blocks=1 00:09:34.961 00:09:34.961 ' 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.961 04:46:26 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:34.961 04:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.527 04:46:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:41.527 04:46:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:41.527 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:41.527 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:41.527 
04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.527 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:41.528 Found net devices under 0000:af:00.0: cvl_0_0 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:41.528 Found net devices under 0000:af:00.1: cvl_0_1 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:41.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:41.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:09:41.528 00:09:41.528 --- 10.0.0.2 ping statistics --- 00:09:41.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.528 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:09:41.528 00:09:41.528 --- 10.0.0.1 ping statistics --- 00:09:41.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.528 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:41.528 04:46:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.528 04:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=521017 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 521017 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 521017 ']' 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.528 [2024-12-10 04:46:32.050083] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:09:41.528 [2024-12-10 04:46:32.050126] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.528 [2024-12-10 04:46:32.127972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.528 [2024-12-10 04:46:32.168664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.528 [2024-12-10 04:46:32.168702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.528 [2024-12-10 04:46:32.168709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.528 [2024-12-10 04:46:32.168715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.528 [2024-12-10 04:46:32.168720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
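The records above (nvmf/common.sh@250–291) show the TCP test topology being built: the target-side NIC `cvl_0_0` is moved into a network namespace `cvl_0_0_ns_spdk` at 10.0.0.2/24, the initiator-side NIC `cvl_0_1` stays in the root namespace at 10.0.0.1/24, and port 4420 is opened. A dry-run sketch of those steps, with names and addresses copied from the log (the `run`/`DRY_RUN` wrapper is illustrative — the real commands need root and the `cvl_*` devices, so this only prints them):

```shell
#!/usr/bin/env bash
set -eu

DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_nvmf_tcp_netns() {
    local ns=$1 tgt_if=$2 ini_if=$3          # e.g. cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
    run ip -4 addr flush "$tgt_if"
    run ip -4 addr flush "$ini_if"
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"            # target NIC lives in the namespace
    run ip addr add 10.0.0.1/24 dev "$ini_if"        # initiator side stays in the root ns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}

setup_nvmf_tcp_netns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

The two `ping -c 1` checks in the log (root ns → 10.0.0.2, namespace → 10.0.0.1) then confirm the topology before `nvmf_tgt` is launched inside the namespace via `ip netns exec`.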
00:09:41.528 [2024-12-10 04:46:32.170112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:41.528 [2024-12-10 04:46:32.170219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:41.528 [2024-12-10 04:46:32.170325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.528 [2024-12-10 04:46:32.170326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.528 [2024-12-10 04:46:32.307404] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.528 04:46:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.528 Malloc0 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.528 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.529 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.529 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.529 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.529 [2024-12-10 04:46:32.364217] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.529 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.529 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:41.529 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:41.529 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:41.529 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:41.529 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:41.529 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:41.529 { 00:09:41.529 "params": { 00:09:41.529 "name": "Nvme$subsystem", 00:09:41.529 "trtype": "$TEST_TRANSPORT", 00:09:41.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.529 "adrfam": "ipv4", 00:09:41.529 "trsvcid": "$NVMF_PORT", 00:09:41.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.529 "hdgst": ${hdgst:-false}, 00:09:41.529 "ddgst": ${ddgst:-false} 00:09:41.529 }, 00:09:41.529 "method": "bdev_nvme_attach_controller" 00:09:41.529 } 00:09:41.529 EOF 00:09:41.529 )") 00:09:41.529 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:41.529 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:41.529 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:41.529 04:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:41.529 "params": { 00:09:41.529 "name": "Nvme1", 00:09:41.529 "trtype": "tcp", 00:09:41.529 "traddr": "10.0.0.2", 00:09:41.529 "adrfam": "ipv4", 00:09:41.529 "trsvcid": "4420", 00:09:41.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.529 "hdgst": false, 00:09:41.529 "ddgst": false 00:09:41.529 }, 00:09:41.529 "method": "bdev_nvme_attach_controller" 00:09:41.529 }' 00:09:41.529 [2024-12-10 04:46:32.412237] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:09:41.529 [2024-12-10 04:46:32.412278] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521046 ] 00:09:41.529 [2024-12-10 04:46:32.492706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:41.529 [2024-12-10 04:46:32.535062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.529 [2024-12-10 04:46:32.535173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.529 [2024-12-10 04:46:32.535181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.786 I/O targets: 00:09:41.786 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:41.786 00:09:41.786 00:09:41.786 CUnit - A unit testing framework for C - Version 2.1-3 00:09:41.786 http://cunit.sourceforge.net/ 00:09:41.786 00:09:41.786 00:09:41.786 Suite: bdevio tests on: Nvme1n1 00:09:41.786 Test: blockdev write read block ...passed 00:09:41.786 Test: blockdev write zeroes read block ...passed 00:09:41.786 Test: blockdev write zeroes read no split ...passed 00:09:41.786 Test: blockdev write zeroes read split 
...passed 00:09:41.786 Test: blockdev write zeroes read split partial ...passed 00:09:41.786 Test: blockdev reset ...[2024-12-10 04:46:32.850115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:41.786 [2024-12-10 04:46:32.850181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92f4f0 (9): Bad file descriptor 00:09:41.787 [2024-12-10 04:46:32.865763] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:41.787 passed 00:09:41.787 Test: blockdev write read 8 blocks ...passed 00:09:41.787 Test: blockdev write read size > 128k ...passed 00:09:41.787 Test: blockdev write read invalid size ...passed 00:09:41.787 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:41.787 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:41.787 Test: blockdev write read max offset ...passed 00:09:42.044 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:42.044 Test: blockdev writev readv 8 blocks ...passed 00:09:42.044 Test: blockdev writev readv 30 x 1block ...passed 00:09:42.044 Test: blockdev writev readv block ...passed 00:09:42.044 Test: blockdev writev readv size > 128k ...passed 00:09:42.044 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:42.044 Test: blockdev comparev and writev ...[2024-12-10 04:46:33.075985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.044 [2024-12-10 04:46:33.076012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:42.044 [2024-12-10 04:46:33.076026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.044 [2024-12-10 
04:46:33.076041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:42.044 [2024-12-10 04:46:33.076294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.044 [2024-12-10 04:46:33.076305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:42.044 [2024-12-10 04:46:33.076316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.044 [2024-12-10 04:46:33.076323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:42.044 [2024-12-10 04:46:33.076549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.045 [2024-12-10 04:46:33.076559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:42.045 [2024-12-10 04:46:33.076570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.045 [2024-12-10 04:46:33.076576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:42.045 [2024-12-10 04:46:33.076791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.045 [2024-12-10 04:46:33.076800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:42.045 [2024-12-10 04:46:33.076811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.045 [2024-12-10 04:46:33.076818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:42.045 passed 00:09:42.045 Test: blockdev nvme passthru rw ...passed 00:09:42.045 Test: blockdev nvme passthru vendor specific ...[2024-12-10 04:46:33.160518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:42.045 [2024-12-10 04:46:33.160534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:42.045 [2024-12-10 04:46:33.160634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:42.045 [2024-12-10 04:46:33.160642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:42.045 [2024-12-10 04:46:33.160735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:42.045 [2024-12-10 04:46:33.160744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:42.045 [2024-12-10 04:46:33.160841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:42.045 [2024-12-10 04:46:33.160854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:42.045 passed 00:09:42.045 Test: blockdev nvme admin passthru ...passed 00:09:42.302 Test: blockdev copy ...passed 00:09:42.302 00:09:42.302 Run Summary: Type Total Ran Passed Failed Inactive 00:09:42.302 suites 1 1 n/a 0 0 00:09:42.302 tests 23 23 23 0 0 00:09:42.302 asserts 152 152 152 0 n/a 00:09:42.302 00:09:42.302 Elapsed time = 0.957 seconds 
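Before the bdevio suite above ran, the target was provisioned over RPC (bdevio.sh@18–24) and the `bdevio` binary was handed a generated JSON attach config on `/dev/fd/62`. A sketch of that flow, with NQNs, sizes, and addresses copied from the log (the `rpc` wrapper is a stand-in for the test's `rpc_cmd`, so commands are echoed rather than executed):

```shell
rpc() { echo rpc.py "$@"; }   # illustrative stand-in for rpc_cmd

provision_target() {
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc bdev_malloc_create 64 512 -b Malloc0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}

# Same shape as the config gen_nvmf_target_json emits in the log
# (the common.sh helper builds it from a heredoc template and jq).
gen_nvmf_target_json() {
cat <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

provision_target
# The log then launches: bdevio --json /dev/fd/62, with the generated
# config supplied on file descriptor 62.
```

Passing the config via `/dev/fd/62` lets the test feed `bdevio` a per-run attach description without writing a temporary file.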
00:09:42.302 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:42.302 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.302 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.302 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.302 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:42.302 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:42.302 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:42.302 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:42.302 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:42.302 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:42.302 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.302 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.302 rmmod nvme_tcp 00:09:42.302 rmmod nvme_fabrics 00:09:42.302 rmmod nvme_keyring 00:09:42.303 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.303 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:42.303 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:42.303 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 521017 ']' 00:09:42.303 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 521017 00:09:42.303 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 521017 ']' 00:09:42.303 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 521017 00:09:42.303 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:42.303 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 521017 00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 521017' 00:09:42.561 killing process with pid 521017 00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 521017 00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 521017 00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
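The `iptr` teardown that follows (nvmf/common.sh@791) relies on the fact that every rule the test added carries an `SPDK_NVMF:` comment, so cleanup is a filter over the saved ruleset: `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A sketch of just the filtering step on a sample ruleset (the three sample rules are illustrative, not from this run, since exercising the real flow needs root):

```shell
# Rules tagged with an SPDK_NVMF comment are the ones the test inserted;
# dropping those lines and restoring the remainder undoes only its changes.
sample_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
-A INPUT -j DROP'

kept=$(printf '%s\n' "$sample_rules" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

Tagging rules at insert time (as seen earlier at nvmf/common.sh@790) avoids having to remember exact rule specs for deletion during teardown.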
00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.561 04:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.096 04:46:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:45.096 00:09:45.096 real 0m9.883s 00:09:45.096 user 0m9.569s 00:09:45.096 sys 0m4.950s 00:09:45.096 04:46:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.096 04:46:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:45.096 ************************************ 00:09:45.096 END TEST nvmf_bdevio 00:09:45.096 ************************************ 00:09:45.096 04:46:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:45.096 00:09:45.096 real 4m35.764s 00:09:45.096 user 10m22.178s 00:09:45.096 sys 1m37.106s 00:09:45.096 04:46:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.096 04:46:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.096 ************************************ 00:09:45.096 END TEST nvmf_target_core 00:09:45.096 ************************************ 00:09:45.096 04:46:35 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:45.096 04:46:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.096 04:46:35 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.096 04:46:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:09:45.096 ************************************ 00:09:45.096 START TEST nvmf_target_extra 00:09:45.096 ************************************ 00:09:45.096 04:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:45.096 * Looking for test storage... 00:09:45.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:45.096 04:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.096 04:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:45.096 04:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.096 04:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.096 04:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.096 04:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.096 04:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.096 04:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.096 --rc genhtml_branch_coverage=1 00:09:45.096 --rc genhtml_function_coverage=1 00:09:45.096 --rc genhtml_legend=1 00:09:45.096 --rc geninfo_all_blocks=1 
00:09:45.096 --rc geninfo_unexecuted_blocks=1 00:09:45.096 00:09:45.096 ' 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.096 --rc genhtml_branch_coverage=1 00:09:45.096 --rc genhtml_function_coverage=1 00:09:45.096 --rc genhtml_legend=1 00:09:45.096 --rc geninfo_all_blocks=1 00:09:45.096 --rc geninfo_unexecuted_blocks=1 00:09:45.096 00:09:45.096 ' 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.096 --rc genhtml_branch_coverage=1 00:09:45.096 --rc genhtml_function_coverage=1 00:09:45.096 --rc genhtml_legend=1 00:09:45.096 --rc geninfo_all_blocks=1 00:09:45.096 --rc geninfo_unexecuted_blocks=1 00:09:45.096 00:09:45.096 ' 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.096 --rc genhtml_branch_coverage=1 00:09:45.096 --rc genhtml_function_coverage=1 00:09:45.096 --rc genhtml_legend=1 00:09:45.096 --rc geninfo_all_blocks=1 00:09:45.096 --rc geninfo_unexecuted_blocks=1 00:09:45.096 00:09:45.096 ' 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.096 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:45.097 ************************************ 00:09:45.097 START TEST nvmf_example 00:09:45.097 ************************************ 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:45.097 * Looking for test storage... 00:09:45.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:45.097 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.357 
04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.357 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.357 --rc genhtml_branch_coverage=1 00:09:45.357 --rc genhtml_function_coverage=1 00:09:45.357 --rc genhtml_legend=1 00:09:45.357 --rc geninfo_all_blocks=1 00:09:45.357 --rc geninfo_unexecuted_blocks=1 00:09:45.357 00:09:45.357 ' 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.358 --rc genhtml_branch_coverage=1 00:09:45.358 --rc genhtml_function_coverage=1 00:09:45.358 --rc genhtml_legend=1 00:09:45.358 --rc geninfo_all_blocks=1 00:09:45.358 --rc geninfo_unexecuted_blocks=1 00:09:45.358 00:09:45.358 ' 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.358 --rc genhtml_branch_coverage=1 00:09:45.358 --rc genhtml_function_coverage=1 00:09:45.358 --rc genhtml_legend=1 00:09:45.358 --rc geninfo_all_blocks=1 00:09:45.358 --rc geninfo_unexecuted_blocks=1 00:09:45.358 00:09:45.358 ' 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.358 --rc 
genhtml_branch_coverage=1 00:09:45.358 --rc genhtml_function_coverage=1 00:09:45.358 --rc genhtml_legend=1 00:09:45.358 --rc geninfo_all_blocks=1 00:09:45.358 --rc geninfo_unexecuted_blocks=1 00:09:45.358 00:09:45.358 ' 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:45.358 04:46:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.358 
04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:45.358 04:46:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:51.926 04:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:51.926 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:51.926 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:51.926 Found net devices under 0000:af:00.0: cvl_0_0 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.926 04:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.926 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:51.927 Found net devices under 0000:af:00.1: cvl_0_1 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.927 
04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.927 04:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:51.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:09:51.927 00:09:51.927 --- 10.0.0.2 ping statistics --- 00:09:51.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.927 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:51.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:09:51.927 00:09:51.927 --- 10.0.0.1 ping statistics --- 00:09:51.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.927 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:51.927 04:46:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=524794 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 524794 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 524794 ']' 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:09:51.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.927 04:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:52.185 04:46:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:52.185 04:46:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:04.380 Initializing NVMe Controllers 00:10:04.380 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:04.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:04.380 Initialization complete. Launching workers. 00:10:04.380 ======================================================== 00:10:04.380 Latency(us) 00:10:04.380 Device Information : IOPS MiB/s Average min max 00:10:04.380 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18080.80 70.63 3539.10 679.08 15477.88 00:10:04.380 ======================================================== 00:10:04.380 Total : 18080.80 70.63 3539.10 679.08 15477.88 00:10:04.380 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:04.380 rmmod nvme_tcp 00:10:04.380 rmmod nvme_fabrics 00:10:04.380 rmmod nvme_keyring 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 524794 ']' 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 524794 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 524794 ']' 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 524794 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 524794 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 524794' 00:10:04.380 killing process with pid 524794 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 524794 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 524794 00:10:04.380 nvmf threads initialize successfully 00:10:04.380 bdev subsystem init successfully 00:10:04.380 created a nvmf target service 00:10:04.380 create targets's poll groups done 00:10:04.380 all subsystems of target started 00:10:04.380 nvmf target is running 00:10:04.380 all subsystems of target stopped 00:10:04.380 destroy targets's poll groups done 00:10:04.380 destroyed the nvmf target service 00:10:04.380 bdev subsystem finish 
successfully 00:10:04.380 nvmf threads destroy successfully 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.380 04:46:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.092 04:46:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:05.092 04:46:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:05.092 04:46:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:05.092 04:46:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:05.092 00:10:05.092 real 0m19.823s 00:10:05.092 user 0m46.160s 00:10:05.092 sys 0m6.079s 00:10:05.092 04:46:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.092 04:46:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:05.092 ************************************ 00:10:05.092 END TEST nvmf_example 00:10:05.092 ************************************ 00:10:05.092 04:46:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:05.092 04:46:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:05.092 04:46:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.092 04:46:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:05.092 ************************************ 00:10:05.092 START TEST nvmf_filesystem 00:10:05.092 ************************************ 00:10:05.092 04:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:05.092 * Looking for test storage... 
00:10:05.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:05.092 
04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.092 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:05.093 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:05.093 --rc genhtml_branch_coverage=1 00:10:05.093 --rc genhtml_function_coverage=1 00:10:05.093 --rc genhtml_legend=1 00:10:05.093 --rc geninfo_all_blocks=1 00:10:05.093 --rc geninfo_unexecuted_blocks=1 00:10:05.093 00:10:05.093 ' 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:05.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.093 --rc genhtml_branch_coverage=1 00:10:05.093 --rc genhtml_function_coverage=1 00:10:05.093 --rc genhtml_legend=1 00:10:05.093 --rc geninfo_all_blocks=1 00:10:05.093 --rc geninfo_unexecuted_blocks=1 00:10:05.093 00:10:05.093 ' 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:05.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.093 --rc genhtml_branch_coverage=1 00:10:05.093 --rc genhtml_function_coverage=1 00:10:05.093 --rc genhtml_legend=1 00:10:05.093 --rc geninfo_all_blocks=1 00:10:05.093 --rc geninfo_unexecuted_blocks=1 00:10:05.093 00:10:05.093 ' 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:05.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.093 --rc genhtml_branch_coverage=1 00:10:05.093 --rc genhtml_function_coverage=1 00:10:05.093 --rc genhtml_legend=1 00:10:05.093 --rc geninfo_all_blocks=1 00:10:05.093 --rc geninfo_unexecuted_blocks=1 00:10:05.093 00:10:05.093 ' 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:05.093 04:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:05.093 04:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:05.093 04:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:05.093 04:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:05.093 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:05.094 04:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:05.094 
04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:05.094 #define SPDK_CONFIG_H 00:10:05.094 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:05.094 #define SPDK_CONFIG_APPS 1 00:10:05.094 #define SPDK_CONFIG_ARCH native 00:10:05.094 #undef SPDK_CONFIG_ASAN 00:10:05.094 #undef SPDK_CONFIG_AVAHI 00:10:05.094 #undef SPDK_CONFIG_CET 00:10:05.094 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:05.094 #define SPDK_CONFIG_COVERAGE 1 00:10:05.094 #define SPDK_CONFIG_CROSS_PREFIX 00:10:05.094 #undef SPDK_CONFIG_CRYPTO 00:10:05.094 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:05.094 #undef SPDK_CONFIG_CUSTOMOCF 00:10:05.094 #undef SPDK_CONFIG_DAOS 00:10:05.094 #define SPDK_CONFIG_DAOS_DIR 00:10:05.094 #define SPDK_CONFIG_DEBUG 1 00:10:05.094 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:05.094 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:05.094 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:05.094 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:05.094 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:05.094 #undef SPDK_CONFIG_DPDK_UADK 00:10:05.094 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:05.094 #define SPDK_CONFIG_EXAMPLES 1 00:10:05.094 #undef SPDK_CONFIG_FC 00:10:05.094 #define SPDK_CONFIG_FC_PATH 00:10:05.094 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:05.094 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:05.094 #define SPDK_CONFIG_FSDEV 1 00:10:05.094 #undef SPDK_CONFIG_FUSE 00:10:05.094 #undef SPDK_CONFIG_FUZZER 00:10:05.094 #define SPDK_CONFIG_FUZZER_LIB 00:10:05.094 #undef SPDK_CONFIG_GOLANG 00:10:05.094 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:05.094 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:05.094 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:05.094 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:05.094 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:05.094 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:05.094 #undef SPDK_CONFIG_HAVE_LZ4 00:10:05.094 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:05.094 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:05.094 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:05.094 #define SPDK_CONFIG_IDXD 1 00:10:05.094 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:05.094 #undef SPDK_CONFIG_IPSEC_MB 00:10:05.094 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:05.094 #define SPDK_CONFIG_ISAL 1 00:10:05.094 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:05.094 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:05.094 #define SPDK_CONFIG_LIBDIR 00:10:05.094 #undef SPDK_CONFIG_LTO 00:10:05.094 #define SPDK_CONFIG_MAX_LCORES 128 00:10:05.094 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:05.094 #define SPDK_CONFIG_NVME_CUSE 1 00:10:05.094 #undef SPDK_CONFIG_OCF 00:10:05.094 #define SPDK_CONFIG_OCF_PATH 00:10:05.094 #define SPDK_CONFIG_OPENSSL_PATH 00:10:05.094 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:05.094 #define SPDK_CONFIG_PGO_DIR 00:10:05.094 #undef SPDK_CONFIG_PGO_USE 00:10:05.094 #define SPDK_CONFIG_PREFIX /usr/local 00:10:05.094 #undef SPDK_CONFIG_RAID5F 00:10:05.094 #undef SPDK_CONFIG_RBD 00:10:05.094 #define SPDK_CONFIG_RDMA 1 00:10:05.094 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:05.094 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:05.094 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:05.094 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:05.094 #define SPDK_CONFIG_SHARED 1 00:10:05.094 #undef SPDK_CONFIG_SMA 00:10:05.094 #define SPDK_CONFIG_TESTS 1 00:10:05.094 #undef SPDK_CONFIG_TSAN 00:10:05.094 #define SPDK_CONFIG_UBLK 1 00:10:05.094 #define SPDK_CONFIG_UBSAN 1 00:10:05.094 #undef SPDK_CONFIG_UNIT_TESTS 00:10:05.094 #undef SPDK_CONFIG_URING 00:10:05.094 #define SPDK_CONFIG_URING_PATH 00:10:05.094 #undef SPDK_CONFIG_URING_ZNS 00:10:05.094 #undef SPDK_CONFIG_USDT 00:10:05.094 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:05.094 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:05.094 #define SPDK_CONFIG_VFIO_USER 1 00:10:05.094 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:05.094 #define SPDK_CONFIG_VHOST 1 00:10:05.094 #define SPDK_CONFIG_VIRTIO 1 00:10:05.094 #undef SPDK_CONFIG_VTUNE 00:10:05.094 #define SPDK_CONFIG_VTUNE_DIR 00:10:05.094 #define SPDK_CONFIG_WERROR 1 00:10:05.094 #define SPDK_CONFIG_WPDK_DIR 00:10:05.094 #undef SPDK_CONFIG_XNVME 00:10:05.094 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:05.094 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:05.095 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:05.095 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:05.095 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:05.095 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:05.095 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:05.095 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:05.095 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:05.095 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:05.095 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:05.095 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:05.357 04:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:05.357 
04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:05.357 04:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:05.357 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:05.358 
04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:05.358 04:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:05.358 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 527154 ]] 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 527154 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Yeklwz 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Yeklwz/tests/target /tmp/spdk.Yeklwz 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88946913280 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100837203968 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11890290688 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50407235584 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=20144435200 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20167442432 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23007232 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=49344389120 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1074212864 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:05.359 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=10083704832 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=10083717120 00:10:05.360 04:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:05.360 * Looking for test storage... 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88946913280 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@394 -- # new_size=14104883200 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 
00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.360 04:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:05.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.360 --rc genhtml_branch_coverage=1 00:10:05.360 --rc genhtml_function_coverage=1 00:10:05.360 --rc genhtml_legend=1 00:10:05.360 --rc geninfo_all_blocks=1 00:10:05.360 --rc geninfo_unexecuted_blocks=1 00:10:05.360 00:10:05.360 ' 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:05.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.360 --rc genhtml_branch_coverage=1 00:10:05.360 --rc genhtml_function_coverage=1 00:10:05.360 --rc genhtml_legend=1 00:10:05.360 --rc geninfo_all_blocks=1 00:10:05.360 --rc geninfo_unexecuted_blocks=1 00:10:05.360 00:10:05.360 ' 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:05.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.360 --rc genhtml_branch_coverage=1 00:10:05.360 --rc genhtml_function_coverage=1 00:10:05.360 --rc genhtml_legend=1 00:10:05.360 --rc geninfo_all_blocks=1 00:10:05.360 --rc geninfo_unexecuted_blocks=1 00:10:05.360 00:10:05.360 ' 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:05.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.360 --rc 
genhtml_branch_coverage=1 00:10:05.360 --rc genhtml_function_coverage=1 00:10:05.360 --rc genhtml_legend=1 00:10:05.360 --rc geninfo_all_blocks=1 00:10:05.360 --rc geninfo_unexecuted_blocks=1 00:10:05.360 00:10:05.360 ' 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:05.360 04:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.360 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.361 04:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.929 04:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:11.929 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:11.929 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.929 04:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:11.929 Found net devices under 0000:af:00.0: cvl_0_0 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.929 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:11.930 Found net devices under 0000:af:00.1: cvl_0_1 00:10:11.930 04:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:11.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:11.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:10:11.930 00:10:11.930 --- 10.0.0.2 ping statistics --- 00:10:11.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.930 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:11.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:10:11.930 00:10:11.930 --- 10.0.0.1 ping statistics --- 00:10:11.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.930 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:11.930 04:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.930 ************************************ 00:10:11.930 START TEST nvmf_filesystem_no_in_capsule 00:10:11.930 ************************************ 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=530347 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 530347 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 530347 ']' 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.930 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.930 [2024-12-10 04:47:02.541325] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:10:11.930 [2024-12-10 04:47:02.541371] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.930 [2024-12-10 04:47:02.619830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:11.931 [2024-12-10 04:47:02.660509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.931 [2024-12-10 04:47:02.660544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:11.931 [2024-12-10 04:47:02.660551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.931 [2024-12-10 04:47:02.660557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.931 [2024-12-10 04:47:02.660562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.931 [2024-12-10 04:47:02.661873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.931 [2024-12-10 04:47:02.661983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.931 [2024-12-10 04:47:02.662089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.931 [2024-12-10 04:47:02.662090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.931 [2024-12-10 04:47:02.794790] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.931 Malloc1 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.931 [2024-12-10 04:47:02.957341] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:11.931 04:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:11.931 { 00:10:11.931 "name": "Malloc1", 00:10:11.931 "aliases": [ 00:10:11.931 "7978841a-bc6f-4a44-89c4-140e1a013fc3" 00:10:11.931 ], 00:10:11.931 "product_name": "Malloc disk", 00:10:11.931 "block_size": 512, 00:10:11.931 "num_blocks": 1048576, 00:10:11.931 "uuid": "7978841a-bc6f-4a44-89c4-140e1a013fc3", 00:10:11.931 "assigned_rate_limits": { 00:10:11.931 "rw_ios_per_sec": 0, 00:10:11.931 "rw_mbytes_per_sec": 0, 00:10:11.931 "r_mbytes_per_sec": 0, 00:10:11.931 "w_mbytes_per_sec": 0 00:10:11.931 }, 00:10:11.931 "claimed": true, 00:10:11.931 "claim_type": "exclusive_write", 00:10:11.931 "zoned": false, 00:10:11.931 "supported_io_types": { 00:10:11.931 "read": true, 00:10:11.931 "write": true, 00:10:11.931 "unmap": true, 00:10:11.931 "flush": true, 00:10:11.931 "reset": true, 00:10:11.931 "nvme_admin": false, 00:10:11.931 "nvme_io": false, 00:10:11.931 "nvme_io_md": false, 00:10:11.931 "write_zeroes": true, 00:10:11.931 "zcopy": true, 00:10:11.931 "get_zone_info": false, 00:10:11.931 "zone_management": false, 00:10:11.931 "zone_append": false, 00:10:11.931 "compare": false, 00:10:11.931 "compare_and_write": 
false, 00:10:11.931 "abort": true, 00:10:11.931 "seek_hole": false, 00:10:11.931 "seek_data": false, 00:10:11.931 "copy": true, 00:10:11.931 "nvme_iov_md": false 00:10:11.931 }, 00:10:11.931 "memory_domains": [ 00:10:11.931 { 00:10:11.931 "dma_device_id": "system", 00:10:11.931 "dma_device_type": 1 00:10:11.931 }, 00:10:11.931 { 00:10:11.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.931 "dma_device_type": 2 00:10:11.931 } 00:10:11.931 ], 00:10:11.931 "driver_specific": {} 00:10:11.931 } 00:10:11.931 ]' 00:10:11.931 04:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:11.931 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:11.931 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:12.188 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:12.188 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:12.188 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:12.188 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:12.188 04:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:13.123 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:13.123 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:13.123 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:13.123 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:13.123 04:47:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:15.661 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:15.661 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:15.661 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:15.661 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:15.661 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:15.661 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:15.661 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:15.661 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:15.661 04:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:15.661 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:15.661 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:15.661 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:15.661 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:15.661 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:15.661 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:15.661 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:15.661 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:15.661 04:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:16.228 04:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:17.162 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:17.162 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:17.162 04:47:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:17.162 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.162 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.162 ************************************ 00:10:17.162 START TEST filesystem_ext4 00:10:17.162 ************************************ 00:10:17.162 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:17.162 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:17.162 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:17.162 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:17.162 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:17.162 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:17.162 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:17.163 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:17.163 04:47:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:17.163 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:17.163 04:47:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:17.163 mke2fs 1.47.0 (5-Feb-2023) 00:10:17.163 Discarding device blocks: 0/522240 done 00:10:17.421 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:17.421 Filesystem UUID: 465b1bd1-dd3d-4bdc-92a9-2debdd31fdf9 00:10:17.421 Superblock backups stored on blocks: 00:10:17.421 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:17.421 00:10:17.421 Allocating group tables: 0/64 done 00:10:17.421 Writing inode tables: 0/64 done 00:10:19.323 Creating journal (8192 blocks): done 00:10:20.407 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:10:20.407 00:10:20.407 04:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:20.407 04:47:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:26.973 04:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 530347 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:26.973 00:10:26.973 real 0m9.389s 00:10:26.973 user 0m0.028s 00:10:26.973 sys 0m0.072s 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:26.973 ************************************ 00:10:26.973 END TEST filesystem_ext4 00:10:26.973 ************************************ 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:26.973 
04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.973 ************************************ 00:10:26.973 START TEST filesystem_btrfs 00:10:26.973 ************************************ 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:26.973 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:26.974 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:26.974 04:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:26.974 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:26.974 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:26.974 btrfs-progs v6.8.1 00:10:26.974 See https://btrfs.readthedocs.io for more information. 00:10:26.974 00:10:26.974 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:26.974 NOTE: several default settings have changed in version 5.15, please make sure 00:10:26.974 this does not affect your deployments: 00:10:26.974 - DUP for metadata (-m dup) 00:10:26.974 - enabled no-holes (-O no-holes) 00:10:26.974 - enabled free-space-tree (-R free-space-tree) 00:10:26.974 00:10:26.974 Label: (null) 00:10:26.974 UUID: 344dd59c-a88f-41e7-9766-0861440bfaab 00:10:26.974 Node size: 16384 00:10:26.974 Sector size: 4096 (CPU page size: 4096) 00:10:26.974 Filesystem size: 510.00MiB 00:10:26.974 Block group profiles: 00:10:26.974 Data: single 8.00MiB 00:10:26.974 Metadata: DUP 32.00MiB 00:10:26.974 System: DUP 8.00MiB 00:10:26.974 SSD detected: yes 00:10:26.974 Zoned device: no 00:10:26.974 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:26.974 Checksum: crc32c 00:10:26.974 Number of devices: 1 00:10:26.974 Devices: 00:10:26.974 ID SIZE PATH 00:10:26.974 1 510.00MiB /dev/nvme0n1p1 00:10:26.974 00:10:26.974 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:26.974 04:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:27.233 04:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:27.233 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:27.233 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:27.233 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:27.233 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:27.233 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:27.233 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 530347 00:10:27.233 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:27.233 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:27.233 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:27.233 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:27.233 00:10:27.233 real 0m0.707s 00:10:27.233 user 0m0.027s 00:10:27.233 sys 0m0.115s 00:10:27.233 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.233 
04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:27.233 ************************************ 00:10:27.233 END TEST filesystem_btrfs 00:10:27.233 ************************************ 00:10:27.233 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:27.233 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:27.233 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.233 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.492 ************************************ 00:10:27.492 START TEST filesystem_xfs 00:10:27.492 ************************************ 00:10:27.492 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:27.492 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:27.492 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:27.492 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:27.492 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:27.492 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:27.492 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:27.492 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:27.492 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:27.492 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:27.492 04:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:27.492 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:27.492 = sectsz=512 attr=2, projid32bit=1 00:10:27.492 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:27.492 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:27.492 data = bsize=4096 blocks=130560, imaxpct=25 00:10:27.492 = sunit=0 swidth=0 blks 00:10:27.492 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:27.492 log =internal log bsize=4096 blocks=16384, version=2 00:10:27.492 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:27.492 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:28.869 Discarding blocks...Done. 
00:10:28.869 04:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:28.869 04:47:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:30.245 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:30.504 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:30.504 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:30.504 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:30.504 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:30.504 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:30.504 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 530347 00:10:30.504 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:30.504 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:30.504 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:30.504 04:47:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:30.504 00:10:30.504 real 0m3.038s 00:10:30.504 user 0m0.024s 00:10:30.504 sys 0m0.074s 00:10:30.504 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.504 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:30.504 ************************************ 00:10:30.504 END TEST filesystem_xfs 00:10:30.504 ************************************ 00:10:30.504 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:30.504 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:30.504 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:30.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 530347 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 530347 ']' 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 530347 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 530347 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 530347' 00:10:30.763 killing process with pid 530347 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 530347 00:10:30.763 04:47:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 530347 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:31.023 00:10:31.023 real 0m19.580s 00:10:31.023 user 1m17.137s 00:10:31.023 sys 0m1.431s 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.023 ************************************ 00:10:31.023 END TEST nvmf_filesystem_no_in_capsule 00:10:31.023 ************************************ 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.023 04:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:31.023 ************************************ 00:10:31.023 START TEST nvmf_filesystem_in_capsule 00:10:31.023 ************************************ 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=533718 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 533718 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 533718 ']' 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.023 04:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.023 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.282 [2024-12-10 04:47:22.189031] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:10:31.282 [2024-12-10 04:47:22.189074] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.282 [2024-12-10 04:47:22.269472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.282 [2024-12-10 04:47:22.307891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.282 [2024-12-10 04:47:22.307932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:31.282 [2024-12-10 04:47:22.307938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.282 [2024-12-10 04:47:22.307944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.282 [2024-12-10 04:47:22.307949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:31.282 [2024-12-10 04:47:22.309223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.282 [2024-12-10 04:47:22.309332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.282 [2024-12-10 04:47:22.309437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.282 [2024-12-10 04:47:22.309438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.282 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.282 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:31.282 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:31.282 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:31.282 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.540 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.540 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:31.540 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:31.540 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.540 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.540 [2024-12-10 04:47:22.454873] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.540 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.540 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:31.540 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.540 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.540 Malloc1 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.541 04:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.541 [2024-12-10 04:47:22.611349] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.541 04:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:31.541 { 00:10:31.541 "name": "Malloc1", 00:10:31.541 "aliases": [ 00:10:31.541 "ac2ff984-6cf8-451e-b169-b1dd7b7f6a91" 00:10:31.541 ], 00:10:31.541 "product_name": "Malloc disk", 00:10:31.541 "block_size": 512, 00:10:31.541 "num_blocks": 1048576, 00:10:31.541 "uuid": "ac2ff984-6cf8-451e-b169-b1dd7b7f6a91", 00:10:31.541 "assigned_rate_limits": { 00:10:31.541 "rw_ios_per_sec": 0, 00:10:31.541 "rw_mbytes_per_sec": 0, 00:10:31.541 "r_mbytes_per_sec": 0, 00:10:31.541 "w_mbytes_per_sec": 0 00:10:31.541 }, 00:10:31.541 "claimed": true, 00:10:31.541 "claim_type": "exclusive_write", 00:10:31.541 "zoned": false, 00:10:31.541 "supported_io_types": { 00:10:31.541 "read": true, 00:10:31.541 "write": true, 00:10:31.541 "unmap": true, 00:10:31.541 "flush": true, 00:10:31.541 "reset": true, 00:10:31.541 "nvme_admin": false, 00:10:31.541 "nvme_io": false, 00:10:31.541 "nvme_io_md": false, 00:10:31.541 "write_zeroes": true, 00:10:31.541 "zcopy": true, 00:10:31.541 "get_zone_info": false, 00:10:31.541 "zone_management": false, 00:10:31.541 "zone_append": false, 00:10:31.541 "compare": false, 00:10:31.541 "compare_and_write": false, 00:10:31.541 "abort": true, 00:10:31.541 "seek_hole": false, 00:10:31.541 "seek_data": false, 00:10:31.541 "copy": true, 00:10:31.541 "nvme_iov_md": false 00:10:31.541 }, 00:10:31.541 "memory_domains": [ 00:10:31.541 { 00:10:31.541 "dma_device_id": "system", 00:10:31.541 "dma_device_type": 1 00:10:31.541 }, 00:10:31.541 { 00:10:31.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.541 "dma_device_type": 2 00:10:31.541 } 00:10:31.541 ], 00:10:31.541 
"driver_specific": {} 00:10:31.541 } 00:10:31.541 ]' 00:10:31.541 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:31.800 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:31.800 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:31.800 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:31.800 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:31.800 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:31.800 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:31.800 04:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:32.736 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:32.736 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:32.736 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:32.736 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:32.736 04:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:35.274 04:47:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:35.274 04:47:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:35.274 04:47:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:35.274 04:47:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:35.274 04:47:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:35.274 04:47:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:35.274 04:47:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:35.274 04:47:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:35.274 04:47:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:35.274 04:47:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:35.274 04:47:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:35.274 04:47:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:35.274 04:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:35.274 04:47:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:35.274 04:47:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:35.274 04:47:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:35.274 04:47:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:35.274 04:47:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:35.274 04:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:36.652 04:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:36.652 04:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:36.652 04:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:36.652 04:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.652 04:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.652 ************************************ 00:10:36.652 START TEST filesystem_in_capsule_ext4 00:10:36.652 ************************************ 00:10:36.652 04:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:36.652 04:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:36.652 04:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:36.652 04:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:36.652 04:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:36.652 04:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:36.652 04:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:36.652 04:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:36.652 04:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:36.652 04:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:36.652 04:47:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:36.652 mke2fs 1.47.0 (5-Feb-2023) 00:10:36.652 Discarding device blocks: 
0/522240 done 00:10:36.652 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:36.652 Filesystem UUID: 3136e305-e36d-455e-97fc-3fc700c8b673 00:10:36.652 Superblock backups stored on blocks: 00:10:36.652 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:36.652 00:10:36.652 Allocating group tables: 0/64 done 00:10:36.652 Writing inode tables: 0/64 done 00:10:36.912 Creating journal (8192 blocks): done 00:10:37.171 Writing superblocks and filesystem accounting information: 0/64 done 00:10:37.171 00:10:37.171 04:47:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:37.171 04:47:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:42.442 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:42.442 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:42.442 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:42.442 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:42.442 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:42.442 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:42.442 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 533718 00:10:42.442 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:42.442 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:42.442 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:42.442 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:42.442 00:10:42.442 real 0m6.130s 00:10:42.442 user 0m0.029s 00:10:42.442 sys 0m0.067s 00:10:42.442 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.442 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:42.442 ************************************ 00:10:42.442 END TEST filesystem_in_capsule_ext4 00:10:42.442 ************************************ 00:10:42.442 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:42.442 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:42.442 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.442 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.702 ************************************ 00:10:42.702 START 
TEST filesystem_in_capsule_btrfs 00:10:42.702 ************************************ 00:10:42.702 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:42.702 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:42.702 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:42.702 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:42.702 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:42.702 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:42.702 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:42.702 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:42.702 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:42.702 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:42.702 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:42.702 btrfs-progs v6.8.1 00:10:42.702 See https://btrfs.readthedocs.io for more information. 00:10:42.702 00:10:42.702 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:42.702 NOTE: several default settings have changed in version 5.15, please make sure 00:10:42.702 this does not affect your deployments: 00:10:42.702 - DUP for metadata (-m dup) 00:10:42.702 - enabled no-holes (-O no-holes) 00:10:42.702 - enabled free-space-tree (-R free-space-tree) 00:10:42.702 00:10:42.702 Label: (null) 00:10:42.702 UUID: 515064e5-4052-47e1-a7c9-4934e6937857 00:10:42.702 Node size: 16384 00:10:42.702 Sector size: 4096 (CPU page size: 4096) 00:10:42.702 Filesystem size: 510.00MiB 00:10:42.702 Block group profiles: 00:10:42.702 Data: single 8.00MiB 00:10:42.702 Metadata: DUP 32.00MiB 00:10:42.702 System: DUP 8.00MiB 00:10:42.702 SSD detected: yes 00:10:42.702 Zoned device: no 00:10:42.702 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:42.702 Checksum: crc32c 00:10:42.702 Number of devices: 1 00:10:42.702 Devices: 00:10:42.702 ID SIZE PATH 00:10:42.702 1 510.00MiB /dev/nvme0n1p1 00:10:42.702 00:10:42.702 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:42.702 04:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 533718 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:43.639 00:10:43.639 real 0m1.077s 00:10:43.639 user 0m0.024s 00:10:43.639 sys 0m0.118s 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:43.639 ************************************ 00:10:43.639 END TEST filesystem_in_capsule_btrfs 00:10:43.639 ************************************ 00:10:43.639 04:47:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.639 ************************************ 00:10:43.639 START TEST filesystem_in_capsule_xfs 00:10:43.639 ************************************ 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:43.639 
04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:43.639 04:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:43.899 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:43.899 = sectsz=512 attr=2, projid32bit=1 00:10:43.899 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:43.899 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:43.899 data = bsize=4096 blocks=130560, imaxpct=25 00:10:43.899 = sunit=0 swidth=0 blks 00:10:43.899 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:43.899 log =internal log bsize=4096 blocks=16384, version=2 00:10:43.899 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:43.899 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:44.465 Discarding blocks...Done. 
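An added cross-check, not part of the captured run: the mkfs.xfs geometry printed above can be verified with plain shell arithmetic. The data section reports 130560 blocks of 4096 bytes, which should equal the roughly 510 MiB GPT partition carved from the 512 MiB malloc bdev (512 MiB minus partition-table overhead, matching the "Filesystem size: 510.00MiB" line in the earlier btrfs run). All numbers below are copied from the log; nothing else is assumed.

```shell
# Cross-check of the mkfs.xfs geometry reported in the log above.
# data blocks * block size should come out to ~510 MiB, i.e. the
# 0%-100% GPT partition on the 512 MiB namespace. Values from the log.
blocks=130560
bsize=4096
total=$((blocks * bsize))
echo "$total bytes"                  # 534773760
echo "$((total / 1024 / 1024)) MiB"  # 510
```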
00:10:44.465 04:47:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:44.465 04:47:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:46.997 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:46.997 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:46.997 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:46.997 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:46.997 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:46.997 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:46.997 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 533718 00:10:46.997 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:46.998 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:46.998 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:46.998 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:46.998 00:10:46.998 real 0m3.369s 00:10:46.998 user 0m0.023s 00:10:46.998 sys 0m0.072s 00:10:46.998 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.998 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:46.998 ************************************ 00:10:46.998 END TEST filesystem_in_capsule_xfs 00:10:46.998 ************************************ 00:10:47.256 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.515 04:47:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 533718 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 533718 ']' 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 533718 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.515 04:47:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 533718 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 533718' 00:10:47.515 killing process with pid 533718 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 533718 00:10:47.515 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 533718 00:10:48.083 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:48.083 00:10:48.083 real 0m16.824s 00:10:48.083 user 1m6.167s 00:10:48.083 sys 0m1.386s 00:10:48.083 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.083 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.083 ************************************ 00:10:48.083 END TEST nvmf_filesystem_in_capsule 00:10:48.083 ************************************ 00:10:48.083 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:48.083 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:48.083 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:48.083 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:48.083 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:48.083 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:48.083 04:47:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:48.083 rmmod nvme_tcp 00:10:48.083 rmmod nvme_fabrics 00:10:48.083 rmmod nvme_keyring 00:10:48.083 04:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:48.083 04:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:48.083 04:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:48.083 04:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:48.083 04:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:48.083 04:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:48.083 04:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:48.083 04:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:48.083 04:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:48.083 04:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:48.083 04:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:48.083 04:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:48.083 04:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:48.083 04:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.083 04:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.083 04:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:50.618 00:10:50.618 real 0m45.163s 00:10:50.618 user 2m25.365s 00:10:50.618 sys 0m7.517s 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.618 ************************************ 00:10:50.618 END TEST nvmf_filesystem 00:10:50.618 ************************************ 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:50.618 ************************************ 00:10:50.618 START TEST nvmf_target_discovery 00:10:50.618 ************************************ 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:50.618 * Looking for test storage... 
00:10:50.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.618 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:50.619 
04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:50.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.619 --rc genhtml_branch_coverage=1 00:10:50.619 --rc genhtml_function_coverage=1 00:10:50.619 --rc genhtml_legend=1 00:10:50.619 --rc geninfo_all_blocks=1 00:10:50.619 --rc geninfo_unexecuted_blocks=1 00:10:50.619 00:10:50.619 ' 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:50.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.619 --rc genhtml_branch_coverage=1 00:10:50.619 --rc genhtml_function_coverage=1 00:10:50.619 --rc genhtml_legend=1 00:10:50.619 --rc geninfo_all_blocks=1 00:10:50.619 --rc geninfo_unexecuted_blocks=1 00:10:50.619 00:10:50.619 ' 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:50.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.619 --rc genhtml_branch_coverage=1 00:10:50.619 --rc genhtml_function_coverage=1 00:10:50.619 --rc genhtml_legend=1 00:10:50.619 --rc geninfo_all_blocks=1 00:10:50.619 --rc geninfo_unexecuted_blocks=1 00:10:50.619 00:10:50.619 ' 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:50.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.619 --rc genhtml_branch_coverage=1 00:10:50.619 --rc genhtml_function_coverage=1 00:10:50.619 --rc genhtml_legend=1 00:10:50.619 --rc geninfo_all_blocks=1 00:10:50.619 --rc geninfo_unexecuted_blocks=1 00:10:50.619 00:10:50.619 ' 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.619 04:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:50.619 04:47:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.895 04:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.895 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.155 04:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
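The trace above shows nvmf/common.sh bucketing supported NICs into the e810, x722, and mlx arrays by PCI vendor:device ID before scanning sysfs for their net devices. A minimal standalone sketch of that bucketing logic is below; the function name classify_nic is hypothetical (the real script indexes a pci_bus_cache populated from lspci), but the vendor/device IDs are the ones visible in the trace (0x8086 Intel, 0x15b3 Mellanox):

```shell
#!/usr/bin/env bash
# Hedged sketch: bucket PCI "vendor:device" IDs the way the trace above does.
# This does not touch real hardware; IDs are fed in by hand.
declare -a e810=() x722=() mlx=()

classify_nic() {
  local id=$1
  case $id in
    # Intel E810 family (the 0x159b parts found in this run)
    0x8086:0x1592|0x8086:0x159b) e810+=("$id") ;;
    # Intel X722
    0x8086:0x37d2) x722+=("$id") ;;
    # Any Mellanox device ID seen in the trace (0x1017, 0x1015, ...)
    0x15b3:*) mlx+=("$id") ;;
  esac
}

classify_nic 0x8086:0x159b   # -> e810, as in "Found 0000:af:00.0 (0x8086 - 0x159b)"
classify_nic 0x15b3:0x1017   # -> mlx
echo "${#e810[@]} ${#x722[@]} ${#mlx[@]}"   # prints: 1 0 1
```

In the real harness the matching bucket (here e810) then becomes pci_devs, and each device's net interface is looked up under /sys/bus/pci/devices/$pci/net/, which is how the cvl_0_0/cvl_0_1 names appear in the next entries.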
00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:56.155 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:56.155 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.155 04:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:56.155 Found net devices under 0000:af:00.0: cvl_0_0 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.155 04:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:56.155 Found net devices under 0000:af:00.1: cvl_0_1 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:56.155 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.156 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:56.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:56.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:10:56.415 00:10:56.415 --- 10.0.0.2 ping statistics --- 00:10:56.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.415 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:56.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:10:56.415 00:10:56.415 --- 10.0.0.1 ping statistics --- 00:10:56.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.415 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=540235 00:10:56.415 04:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 540235 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 540235 ']' 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.415 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.415 [2024-12-10 04:47:47.410895] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:10:56.415 [2024-12-10 04:47:47.410944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.415 [2024-12-10 04:47:47.490832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.415 [2024-12-10 04:47:47.531650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:56.415 [2024-12-10 04:47:47.531689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.415 [2024-12-10 04:47:47.531697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.415 [2024-12-10 04:47:47.531703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.415 [2024-12-10 04:47:47.531708] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.415 [2024-12-10 04:47:47.533110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.415 [2024-12-10 04:47:47.533230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.415 [2024-12-10 04:47:47.533261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.415 [2024-12-10 04:47:47.533262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.675 [2024-12-10 04:47:47.669767] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.675 Null1 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.675 
04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.675 [2024-12-10 04:47:47.722307] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.675 Null2 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.675 
04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.675 Null3 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.675 Null4 00:10:56.675 
04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.675 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.935 04:47:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:56.935 00:10:56.935 Discovery Log Number of Records 6, Generation counter 6 00:10:56.935 =====Discovery Log Entry 0====== 00:10:56.935 trtype: tcp 00:10:56.935 adrfam: ipv4 00:10:56.935 subtype: current discovery subsystem 00:10:56.935 treq: not required 00:10:56.935 portid: 0 00:10:56.935 trsvcid: 4420 00:10:56.935 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:56.935 traddr: 10.0.0.2 00:10:56.935 eflags: explicit discovery connections, duplicate discovery information 00:10:56.935 sectype: none 00:10:56.935 =====Discovery Log Entry 1====== 00:10:56.935 trtype: tcp 00:10:56.935 adrfam: ipv4 00:10:56.935 subtype: nvme subsystem 00:10:56.935 treq: not required 00:10:56.935 portid: 0 00:10:56.935 trsvcid: 4420 00:10:56.935 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:56.935 traddr: 10.0.0.2 00:10:56.935 eflags: none 00:10:56.935 sectype: none 00:10:56.935 =====Discovery Log Entry 2====== 00:10:56.935 
trtype: tcp 00:10:56.935 adrfam: ipv4 00:10:56.935 subtype: nvme subsystem 00:10:56.935 treq: not required 00:10:56.935 portid: 0 00:10:56.935 trsvcid: 4420 00:10:56.935 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:56.935 traddr: 10.0.0.2 00:10:56.935 eflags: none 00:10:56.935 sectype: none 00:10:56.935 =====Discovery Log Entry 3====== 00:10:56.935 trtype: tcp 00:10:56.935 adrfam: ipv4 00:10:56.935 subtype: nvme subsystem 00:10:56.935 treq: not required 00:10:56.935 portid: 0 00:10:56.935 trsvcid: 4420 00:10:56.935 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:56.935 traddr: 10.0.0.2 00:10:56.935 eflags: none 00:10:56.935 sectype: none 00:10:56.935 =====Discovery Log Entry 4====== 00:10:56.935 trtype: tcp 00:10:56.935 adrfam: ipv4 00:10:56.935 subtype: nvme subsystem 00:10:56.935 treq: not required 00:10:56.935 portid: 0 00:10:56.935 trsvcid: 4420 00:10:56.935 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:56.935 traddr: 10.0.0.2 00:10:56.935 eflags: none 00:10:56.935 sectype: none 00:10:56.935 =====Discovery Log Entry 5====== 00:10:56.935 trtype: tcp 00:10:56.935 adrfam: ipv4 00:10:56.935 subtype: discovery subsystem referral 00:10:56.935 treq: not required 00:10:56.935 portid: 0 00:10:56.935 trsvcid: 4430 00:10:56.935 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:56.935 traddr: 10.0.0.2 00:10:56.935 eflags: none 00:10:56.935 sectype: none 00:10:56.935 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:56.935 Perform nvmf subsystem discovery via RPC 00:10:56.935 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:56.935 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.935 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.195 [ 00:10:57.195 { 00:10:57.195 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:10:57.195 "subtype": "Discovery", 00:10:57.195 "listen_addresses": [ 00:10:57.195 { 00:10:57.195 "trtype": "TCP", 00:10:57.195 "adrfam": "IPv4", 00:10:57.195 "traddr": "10.0.0.2", 00:10:57.195 "trsvcid": "4420" 00:10:57.195 } 00:10:57.195 ], 00:10:57.195 "allow_any_host": true, 00:10:57.195 "hosts": [] 00:10:57.195 }, 00:10:57.195 { 00:10:57.195 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:57.195 "subtype": "NVMe", 00:10:57.195 "listen_addresses": [ 00:10:57.195 { 00:10:57.195 "trtype": "TCP", 00:10:57.195 "adrfam": "IPv4", 00:10:57.195 "traddr": "10.0.0.2", 00:10:57.195 "trsvcid": "4420" 00:10:57.195 } 00:10:57.195 ], 00:10:57.195 "allow_any_host": true, 00:10:57.195 "hosts": [], 00:10:57.195 "serial_number": "SPDK00000000000001", 00:10:57.195 "model_number": "SPDK bdev Controller", 00:10:57.195 "max_namespaces": 32, 00:10:57.195 "min_cntlid": 1, 00:10:57.195 "max_cntlid": 65519, 00:10:57.195 "namespaces": [ 00:10:57.195 { 00:10:57.195 "nsid": 1, 00:10:57.195 "bdev_name": "Null1", 00:10:57.195 "name": "Null1", 00:10:57.195 "nguid": "6746E0A2599D47ED8DABD85C4FF0C144", 00:10:57.195 "uuid": "6746e0a2-599d-47ed-8dab-d85c4ff0c144" 00:10:57.195 } 00:10:57.195 ] 00:10:57.195 }, 00:10:57.195 { 00:10:57.195 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:57.195 "subtype": "NVMe", 00:10:57.195 "listen_addresses": [ 00:10:57.195 { 00:10:57.195 "trtype": "TCP", 00:10:57.195 "adrfam": "IPv4", 00:10:57.195 "traddr": "10.0.0.2", 00:10:57.195 "trsvcid": "4420" 00:10:57.195 } 00:10:57.195 ], 00:10:57.195 "allow_any_host": true, 00:10:57.195 "hosts": [], 00:10:57.195 "serial_number": "SPDK00000000000002", 00:10:57.195 "model_number": "SPDK bdev Controller", 00:10:57.195 "max_namespaces": 32, 00:10:57.195 "min_cntlid": 1, 00:10:57.195 "max_cntlid": 65519, 00:10:57.195 "namespaces": [ 00:10:57.195 { 00:10:57.195 "nsid": 1, 00:10:57.195 "bdev_name": "Null2", 00:10:57.195 "name": "Null2", 00:10:57.195 "nguid": "0D774EDC84224AFCB4C51A4BDF6A03D4", 
00:10:57.195 "uuid": "0d774edc-8422-4afc-b4c5-1a4bdf6a03d4" 00:10:57.195 } 00:10:57.195 ] 00:10:57.195 }, 00:10:57.195 { 00:10:57.195 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:57.195 "subtype": "NVMe", 00:10:57.195 "listen_addresses": [ 00:10:57.195 { 00:10:57.195 "trtype": "TCP", 00:10:57.195 "adrfam": "IPv4", 00:10:57.195 "traddr": "10.0.0.2", 00:10:57.195 "trsvcid": "4420" 00:10:57.195 } 00:10:57.195 ], 00:10:57.195 "allow_any_host": true, 00:10:57.195 "hosts": [], 00:10:57.195 "serial_number": "SPDK00000000000003", 00:10:57.195 "model_number": "SPDK bdev Controller", 00:10:57.195 "max_namespaces": 32, 00:10:57.195 "min_cntlid": 1, 00:10:57.195 "max_cntlid": 65519, 00:10:57.195 "namespaces": [ 00:10:57.195 { 00:10:57.195 "nsid": 1, 00:10:57.195 "bdev_name": "Null3", 00:10:57.195 "name": "Null3", 00:10:57.195 "nguid": "0B90D1DAA7C34CDD8898B50D12BD16D4", 00:10:57.195 "uuid": "0b90d1da-a7c3-4cdd-8898-b50d12bd16d4" 00:10:57.195 } 00:10:57.195 ] 00:10:57.195 }, 00:10:57.195 { 00:10:57.195 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:57.195 "subtype": "NVMe", 00:10:57.195 "listen_addresses": [ 00:10:57.195 { 00:10:57.195 "trtype": "TCP", 00:10:57.195 "adrfam": "IPv4", 00:10:57.195 "traddr": "10.0.0.2", 00:10:57.195 "trsvcid": "4420" 00:10:57.195 } 00:10:57.195 ], 00:10:57.195 "allow_any_host": true, 00:10:57.195 "hosts": [], 00:10:57.195 "serial_number": "SPDK00000000000004", 00:10:57.195 "model_number": "SPDK bdev Controller", 00:10:57.195 "max_namespaces": 32, 00:10:57.195 "min_cntlid": 1, 00:10:57.195 "max_cntlid": 65519, 00:10:57.195 "namespaces": [ 00:10:57.195 { 00:10:57.195 "nsid": 1, 00:10:57.195 "bdev_name": "Null4", 00:10:57.195 "name": "Null4", 00:10:57.195 "nguid": "4A96A381848841279937C43E812B14DB", 00:10:57.195 "uuid": "4a96a381-8488-4127-9937-c43e812b14db" 00:10:57.195 } 00:10:57.195 ] 00:10:57.195 } 00:10:57.195 ] 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.195 
04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.195 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.196 rmmod nvme_tcp 00:10:57.196 rmmod nvme_fabrics 00:10:57.196 rmmod nvme_keyring 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 540235 ']' 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 540235 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 540235 ']' 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 540235 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:57.196 
04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.196 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 540235 00:10:57.455 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.455 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.455 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 540235' 00:10:57.455 killing process with pid 540235 00:10:57.455 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 540235 00:10:57.455 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 540235 00:10:57.455 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:57.455 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:57.455 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:57.455 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:57.455 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:57.455 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:57.455 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:57.455 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:57.455 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:57.455 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.455 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.455 04:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:59.992 00:10:59.992 real 0m9.349s 00:10:59.992 user 0m5.729s 00:10:59.992 sys 0m4.768s 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:59.992 ************************************ 00:10:59.992 END TEST nvmf_target_discovery 00:10:59.992 ************************************ 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:59.992 ************************************ 00:10:59.992 START TEST nvmf_referrals 00:10:59.992 ************************************ 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:59.992 * Looking for test storage... 
00:10:59.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:59.992 04:47:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.992 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:59.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.992 
--rc genhtml_branch_coverage=1 00:10:59.992 --rc genhtml_function_coverage=1 00:10:59.992 --rc genhtml_legend=1 00:10:59.992 --rc geninfo_all_blocks=1 00:10:59.992 --rc geninfo_unexecuted_blocks=1 00:10:59.993 00:10:59.993 ' 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:59.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.993 --rc genhtml_branch_coverage=1 00:10:59.993 --rc genhtml_function_coverage=1 00:10:59.993 --rc genhtml_legend=1 00:10:59.993 --rc geninfo_all_blocks=1 00:10:59.993 --rc geninfo_unexecuted_blocks=1 00:10:59.993 00:10:59.993 ' 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:59.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.993 --rc genhtml_branch_coverage=1 00:10:59.993 --rc genhtml_function_coverage=1 00:10:59.993 --rc genhtml_legend=1 00:10:59.993 --rc geninfo_all_blocks=1 00:10:59.993 --rc geninfo_unexecuted_blocks=1 00:10:59.993 00:10:59.993 ' 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:59.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.993 --rc genhtml_branch_coverage=1 00:10:59.993 --rc genhtml_function_coverage=1 00:10:59.993 --rc genhtml_legend=1 00:10:59.993 --rc geninfo_all_blocks=1 00:10:59.993 --rc geninfo_unexecuted_blocks=1 00:10:59.993 00:10:59.993 ' 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.993 
04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.993 04:47:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:59.993 04:47:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:59.993 04:47:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.658 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.658 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:06.658 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:06.658 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:06.658 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:06.658 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:06.658 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:06.658 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:06.658 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:06.658 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:06.658 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:06.659 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:06.659 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:06.659 Found net devices under 0000:af:00.0: cvl_0_0 00:11:06.659 04:47:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:06.659 Found net devices under 0000:af:00.1: cvl_0_1 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:06.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:11:06.659 00:11:06.659 --- 10.0.0.2 ping statistics --- 00:11:06.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.659 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:06.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:11:06.659 00:11:06.659 --- 10.0.0.1 ping statistics --- 00:11:06.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.659 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:06.659 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.660 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=543845 00:11:06.660 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 543845 00:11:06.660 
04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:06.660 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 543845 ']' 00:11:06.660 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.660 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.660 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.660 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.660 04:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.660 [2024-12-10 04:47:56.989941] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:11:06.660 [2024-12-10 04:47:56.989992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.660 [2024-12-10 04:47:57.069004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.660 [2024-12-10 04:47:57.109880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.660 [2024-12-10 04:47:57.109916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:06.660 [2024-12-10 04:47:57.109927] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.660 [2024-12-10 04:47:57.109934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.660 [2024-12-10 04:47:57.109939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:06.660 [2024-12-10 04:47:57.111452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.660 [2024-12-10 04:47:57.111560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.660 [2024-12-10 04:47:57.111593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.660 [2024-12-10 04:47:57.111595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.660 [2024-12-10 04:47:57.249578] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.660 [2024-12-10 04:47:57.280330] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:06.660 04:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.660 04:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:06.660 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:06.920 04:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:06.920 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:06.920 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:06.920 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:06.920 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:06.920 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:07.178 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.178 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:07.178 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:07.178 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:07.178 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:07.178 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:07.178 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.178 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:07.436 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:07.695 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:07.695 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:07.695 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:07.695 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:07.695 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:07.695 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.695 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:07.954 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:07.954 04:47:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:07.954 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:07.954 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:07.954 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.954 04:47:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:07.954 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:07.954 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:07.954 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.954 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.954 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.954 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.954 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:07.954 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.954 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:07.954 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.954 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:07.954 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:07.954 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:07.954 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:07.954 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.954 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:07.954 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:08.213 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:08.213 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:08.213 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:08.213 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:08.213 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.213 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:08.213 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.213 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:08.213 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.213 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.213 rmmod nvme_tcp 00:11:08.213 rmmod nvme_fabrics 00:11:08.472 rmmod nvme_keyring 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 543845 ']' 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 543845 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 543845 ']' 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 543845 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 543845 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 543845' 00:11:08.472 killing process with pid 543845 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 543845 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 543845 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.472 04:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:11.010 00:11:11.010 real 0m11.030s 00:11:11.010 user 0m12.540s 00:11:11.010 sys 0m5.175s 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.010 ************************************ 
00:11:11.010 END TEST nvmf_referrals 00:11:11.010 ************************************ 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:11.010 ************************************ 00:11:11.010 START TEST nvmf_connect_disconnect 00:11:11.010 ************************************ 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:11.010 * Looking for test storage... 
00:11:11.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:11.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.010 --rc genhtml_branch_coverage=1 00:11:11.010 --rc genhtml_function_coverage=1 00:11:11.010 --rc genhtml_legend=1 00:11:11.010 --rc geninfo_all_blocks=1 00:11:11.010 --rc geninfo_unexecuted_blocks=1 00:11:11.010 00:11:11.010 ' 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:11.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.010 --rc genhtml_branch_coverage=1 00:11:11.010 --rc genhtml_function_coverage=1 00:11:11.010 --rc genhtml_legend=1 00:11:11.010 --rc geninfo_all_blocks=1 00:11:11.010 --rc geninfo_unexecuted_blocks=1 00:11:11.010 00:11:11.010 ' 00:11:11.010 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:11.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.011 --rc genhtml_branch_coverage=1 00:11:11.011 --rc genhtml_function_coverage=1 00:11:11.011 --rc genhtml_legend=1 00:11:11.011 --rc geninfo_all_blocks=1 00:11:11.011 --rc geninfo_unexecuted_blocks=1 00:11:11.011 00:11:11.011 ' 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:11.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.011 --rc genhtml_branch_coverage=1 00:11:11.011 --rc genhtml_function_coverage=1 00:11:11.011 --rc genhtml_legend=1 00:11:11.011 --rc geninfo_all_blocks=1 00:11:11.011 --rc geninfo_unexecuted_blocks=1 00:11:11.011 00:11:11.011 ' 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:11.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:11.011 04:48:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.582 04:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:17.582 04:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:17.582 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:17.582 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.582 04:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:17.582 Found net devices under 0000:af:00.0: cvl_0_0 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.582 04:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:17.582 Found net devices under 0000:af:00.1: cvl_0_1 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:17.582 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.583 04:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:17.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:17.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:11:17.583 00:11:17.583 --- 10.0.0.2 ping statistics --- 00:11:17.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.583 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
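
[Editor's note] The namespace setup traced above (address flush, `ip netns add`, moving `cvl_0_0` into the namespace, addressing, the iptables rule for port 4420, and the cross-direction pings) can be summarized in the dry-run sketch below. The `run` wrapper only echoes each command, so the sequence can be read and replayed without root privileges or the `cvl_0_0`/`cvl_0_1` interfaces present; interface names, the namespace name, and the 10.0.0.1/10.0.0.2 addresses are taken from the log, and the real script additionally tags its iptables rule with an `SPDK_NVMF` comment.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-namespace setup recorded in the log above.
# `run` echoes instead of executing, so no root or real NICs are needed.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk   # network namespace that will host the nvmf target
TGT_IF=cvl_0_0       # interface moved into the namespace (gets 10.0.0.2)
INI_IF=cvl_0_1       # initiator-side interface left on the host (gets 10.0.0.1)

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port toward the initiator interface
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Reachability check in both directions, as in the log
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```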
00:11:17.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:11:17.583 00:11:17.583 --- 10.0.0.1 ping statistics --- 00:11:17.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.583 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=548372 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 548372 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 548372 ']' 00:11:17.583 04:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.583 [2024-12-10 04:48:08.044938] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:11:17.583 [2024-12-10 04:48:08.044981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.583 [2024-12-10 04:48:08.122665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.583 [2024-12-10 04:48:08.163415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
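
[Editor's note] `nvmf_tgt` above is launched with `-m 0xF`, and the EAL banner reports four cores available; the reactor start-up lines confirm cores 0 through 3. The mask-to-core mapping can be checked with a small helper (an editorial sketch; the function name is ours, not part of SPDK's scripts):

```shell
# Decode an SPDK/DPDK core mask (e.g. the -m 0xF passed to nvmf_tgt above)
# into the list of core IDs it selects: each set bit enables one core.
coremask_to_cores() {
  local mask=$(( $1 )) bit=0
  local cores=()
  while (( mask )); do
    (( mask & 1 )) && cores+=("$bit")
    (( mask >>= 1, bit++ )) || true
  done
  echo "${cores[*]}"
}

coremask_to_cores 0xF   # → 0 1 2 3, matching the four reactors in the log
```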
00:11:17.583 [2024-12-10 04:48:08.163450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.583 [2024-12-10 04:48:08.163457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.583 [2024-12-10 04:48:08.163463] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.583 [2024-12-10 04:48:08.163468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.583 [2024-12-10 04:48:08.164931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.583 [2024-12-10 04:48:08.165041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.583 [2024-12-10 04:48:08.165147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.583 [2024-12-10 04:48:08.165148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:17.583 04:48:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.583 [2024-12-10 04:48:08.310300] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.583 04:48:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.583 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.584 [2024-12-10 04:48:08.370662] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.584 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.584 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:17.584 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:17.584 04:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:20.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:34.029 04:48:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:34.029 rmmod nvme_tcp 00:11:34.029 rmmod nvme_fabrics 00:11:34.029 rmmod nvme_keyring 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 548372 ']' 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 548372 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 548372 ']' 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 548372 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 548372 00:11:34.029 
04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 548372' 00:11:34.029 killing process with pid 548372 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 548372 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 548372 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:34.029 04:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:34.029 04:48:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:34.029 04:48:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:34.029 04:48:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.029 04:48:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.029 04:48:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.934 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:36.193 00:11:36.194 real 0m25.331s 00:11:36.194 user 1m8.552s 00:11:36.194 sys 0m5.795s 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:36.194 ************************************ 00:11:36.194 END TEST nvmf_connect_disconnect 00:11:36.194 ************************************ 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:36.194 ************************************ 00:11:36.194 START TEST nvmf_multitarget 00:11:36.194 ************************************ 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:36.194 * Looking for test storage... 
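
[Editor's note] The five `NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)` lines and the roughly 25-second `real` time summarized above come from `num_iterations=5` in target/connect_disconnect.sh: the initiator repeatedly connects to and disconnects from the subsystem. A hedged dry-run sketch of that loop follows; the `run` wrapper only echoes, since the real commands need root, nvme-cli, and the live target at 10.0.0.2:4420.

```shell
# Dry-run sketch of the connect/disconnect loop whose output appears above.
# `run` echoes instead of executing; subsystem NQN, address, and port are
# taken from the log.
run() { echo "+ $*"; }

SUBNQN=nqn.2016-06.io.spdk:cnode1
num_iterations=5

for ((i = 1; i <= num_iterations; i++)); do
  run nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN"
  run nvme disconnect -n "$SUBNQN"
done
```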
00:11:36.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:36.194 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:36.454 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.454 --rc genhtml_branch_coverage=1 00:11:36.454 --rc genhtml_function_coverage=1 00:11:36.454 --rc genhtml_legend=1 00:11:36.454 --rc geninfo_all_blocks=1 00:11:36.454 --rc geninfo_unexecuted_blocks=1 00:11:36.454 00:11:36.454 ' 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:36.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.454 --rc genhtml_branch_coverage=1 00:11:36.454 --rc genhtml_function_coverage=1 00:11:36.454 --rc genhtml_legend=1 00:11:36.454 --rc geninfo_all_blocks=1 00:11:36.454 --rc geninfo_unexecuted_blocks=1 00:11:36.454 00:11:36.454 ' 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:36.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.454 --rc genhtml_branch_coverage=1 00:11:36.454 --rc genhtml_function_coverage=1 00:11:36.454 --rc genhtml_legend=1 00:11:36.454 --rc geninfo_all_blocks=1 00:11:36.454 --rc geninfo_unexecuted_blocks=1 00:11:36.454 00:11:36.454 ' 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:36.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.454 --rc genhtml_branch_coverage=1 00:11:36.454 --rc genhtml_function_coverage=1 00:11:36.454 --rc genhtml_legend=1 00:11:36.454 --rc geninfo_all_blocks=1 00:11:36.454 --rc geninfo_unexecuted_blocks=1 00:11:36.454 00:11:36.454 ' 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.454 04:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:36.454 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:36.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.455 04:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:36.455 04:48:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:43.026 04:48:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:43.026 04:48:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:43.026 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:43.026 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.026 04:48:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:43.026 Found net devices under 0000:af:00.0: cvl_0_0 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.026 
04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:43.026 Found net devices under 0000:af:00.1: cvl_0_1 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.026 04:48:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.026 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.027 04:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:43.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:11:43.027 00:11:43.027 --- 10.0.0.2 ping statistics --- 00:11:43.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.027 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:43.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:11:43.027 00:11:43.027 --- 10.0.0.1 ping statistics --- 00:11:43.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.027 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=554815 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 554815 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 554815 ']' 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:43.027 [2024-12-10 04:48:33.345267] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:11:43.027 [2024-12-10 04:48:33.345318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.027 [2024-12-10 04:48:33.423465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.027 [2024-12-10 04:48:33.464627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.027 [2024-12-10 04:48:33.464663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:43.027 [2024-12-10 04:48:33.464670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.027 [2024-12-10 04:48:33.464676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.027 [2024-12-10 04:48:33.464681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.027 [2024-12-10 04:48:33.466047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.027 [2024-12-10 04:48:33.466081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.027 [2024-12-10 04:48:33.466205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.027 [2024-12-10 04:48:33.466206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:43.027 04:48:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:43.027 "nvmf_tgt_1" 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:43.027 "nvmf_tgt_2" 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:43.027 04:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:43.027 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:43.027 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:43.027 true 00:11:43.027 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:43.286 true 00:11:43.286 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:43.286 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:43.286 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:43.286 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:43.286 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:43.286 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:43.286 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:43.286 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:43.286 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:43.286 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.286 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:43.286 rmmod nvme_tcp 00:11:43.286 rmmod nvme_fabrics 00:11:43.286 rmmod nvme_keyring 00:11:43.286 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.286 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:43.545 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:43.545 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 554815 ']' 00:11:43.545 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 554815 00:11:43.545 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 554815 ']' 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 554815 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 554815 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 554815' 00:11:43.546 killing process with pid 554815 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 554815 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 554815 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.546 04:48:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:46.081 00:11:46.081 real 0m9.559s 00:11:46.081 user 0m7.194s 00:11:46.081 sys 0m4.840s 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:46.081 ************************************ 00:11:46.081 END TEST nvmf_multitarget 00:11:46.081 ************************************ 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:46.081 ************************************ 00:11:46.081 START TEST nvmf_rpc 00:11:46.081 ************************************ 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:46.081 * Looking for test storage... 
00:11:46.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.081 04:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:46.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.081 --rc genhtml_branch_coverage=1 00:11:46.081 --rc genhtml_function_coverage=1 00:11:46.081 --rc genhtml_legend=1 00:11:46.081 --rc geninfo_all_blocks=1 00:11:46.081 --rc geninfo_unexecuted_blocks=1 
00:11:46.081 00:11:46.081 ' 00:11:46.081 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:46.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.081 --rc genhtml_branch_coverage=1 00:11:46.081 --rc genhtml_function_coverage=1 00:11:46.081 --rc genhtml_legend=1 00:11:46.081 --rc geninfo_all_blocks=1 00:11:46.081 --rc geninfo_unexecuted_blocks=1 00:11:46.081 00:11:46.081 ' 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:46.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.082 --rc genhtml_branch_coverage=1 00:11:46.082 --rc genhtml_function_coverage=1 00:11:46.082 --rc genhtml_legend=1 00:11:46.082 --rc geninfo_all_blocks=1 00:11:46.082 --rc geninfo_unexecuted_blocks=1 00:11:46.082 00:11:46.082 ' 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:46.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.082 --rc genhtml_branch_coverage=1 00:11:46.082 --rc genhtml_function_coverage=1 00:11:46.082 --rc genhtml_legend=1 00:11:46.082 --rc geninfo_all_blocks=1 00:11:46.082 --rc geninfo_unexecuted_blocks=1 00:11:46.082 00:11:46.082 ' 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.082 04:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:46.082 04:48:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:46.082 04:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.653 
04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:11:52.653 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:52.653 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:52.653 Found net devices under 0000:af:00.0: cvl_0_0 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:52.653 Found net devices under 0000:af:00.1: cvl_0_1 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.653 04:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.653 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:52.654 
04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:52.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:11:52.654 00:11:52.654 --- 10.0.0.2 ping statistics --- 00:11:52.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.654 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:52.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:11:52.654 00:11:52.654 --- 10.0.0.1 ping statistics --- 00:11:52.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.654 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:52.654 04:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.654 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=558540 00:11:52.654 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 558540 00:11:52.654 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.654 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 558540 ']' 00:11:52.654 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.654 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.654 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.654 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.654 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.654 [2024-12-10 04:48:43.058199] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:11:52.654 [2024-12-10 04:48:43.058241] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.654 [2024-12-10 04:48:43.132918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.654 [2024-12-10 04:48:43.174066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.654 [2024-12-10 04:48:43.174100] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:52.654 [2024-12-10 04:48:43.174107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.654 [2024-12-10 04:48:43.174113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.654 [2024-12-10 04:48:43.174118] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.654 [2024-12-10 04:48:43.175536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.654 [2024-12-10 04:48:43.175647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.654 [2024-12-10 04:48:43.175751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.654 [2024-12-10 04:48:43.175752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.913 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.913 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:52.913 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:52.913 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:52.913 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.913 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.913 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:52.913 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.913 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.913 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.913 04:48:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:52.913 "tick_rate": 2100000000, 00:11:52.913 "poll_groups": [ 00:11:52.913 { 00:11:52.913 "name": "nvmf_tgt_poll_group_000", 00:11:52.913 "admin_qpairs": 0, 00:11:52.913 "io_qpairs": 0, 00:11:52.913 "current_admin_qpairs": 0, 00:11:52.913 "current_io_qpairs": 0, 00:11:52.913 "pending_bdev_io": 0, 00:11:52.913 "completed_nvme_io": 0, 00:11:52.913 "transports": [] 00:11:52.913 }, 00:11:52.913 { 00:11:52.913 "name": "nvmf_tgt_poll_group_001", 00:11:52.913 "admin_qpairs": 0, 00:11:52.913 "io_qpairs": 0, 00:11:52.913 "current_admin_qpairs": 0, 00:11:52.913 "current_io_qpairs": 0, 00:11:52.913 "pending_bdev_io": 0, 00:11:52.913 "completed_nvme_io": 0, 00:11:52.913 "transports": [] 00:11:52.913 }, 00:11:52.913 { 00:11:52.913 "name": "nvmf_tgt_poll_group_002", 00:11:52.913 "admin_qpairs": 0, 00:11:52.913 "io_qpairs": 0, 00:11:52.913 "current_admin_qpairs": 0, 00:11:52.913 "current_io_qpairs": 0, 00:11:52.913 "pending_bdev_io": 0, 00:11:52.913 "completed_nvme_io": 0, 00:11:52.913 "transports": [] 00:11:52.913 }, 00:11:52.913 { 00:11:52.913 "name": "nvmf_tgt_poll_group_003", 00:11:52.913 "admin_qpairs": 0, 00:11:52.913 "io_qpairs": 0, 00:11:52.913 "current_admin_qpairs": 0, 00:11:52.913 "current_io_qpairs": 0, 00:11:52.913 "pending_bdev_io": 0, 00:11:52.913 "completed_nvme_io": 0, 00:11:52.913 "transports": [] 00:11:52.913 } 00:11:52.913 ] 00:11:52.913 }' 00:11:52.913 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:52.913 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:52.913 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:52.913 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:52.913 04:48:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:52.913 04:48:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:52.913 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:52.913 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.913 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.913 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.913 [2024-12-10 04:48:44.027341] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.913 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.913 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:52.913 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.913 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.172 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.172 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:53.172 "tick_rate": 2100000000, 00:11:53.172 "poll_groups": [ 00:11:53.172 { 00:11:53.172 "name": "nvmf_tgt_poll_group_000", 00:11:53.172 "admin_qpairs": 0, 00:11:53.172 "io_qpairs": 0, 00:11:53.172 "current_admin_qpairs": 0, 00:11:53.172 "current_io_qpairs": 0, 00:11:53.172 "pending_bdev_io": 0, 00:11:53.172 "completed_nvme_io": 0, 00:11:53.172 "transports": [ 00:11:53.172 { 00:11:53.172 "trtype": "TCP" 00:11:53.172 } 00:11:53.172 ] 00:11:53.172 }, 00:11:53.172 { 00:11:53.172 "name": "nvmf_tgt_poll_group_001", 00:11:53.172 "admin_qpairs": 0, 00:11:53.172 "io_qpairs": 0, 00:11:53.173 "current_admin_qpairs": 0, 00:11:53.173 "current_io_qpairs": 0, 00:11:53.173 "pending_bdev_io": 0, 00:11:53.173 
"completed_nvme_io": 0, 00:11:53.173 "transports": [ 00:11:53.173 { 00:11:53.173 "trtype": "TCP" 00:11:53.173 } 00:11:53.173 ] 00:11:53.173 }, 00:11:53.173 { 00:11:53.173 "name": "nvmf_tgt_poll_group_002", 00:11:53.173 "admin_qpairs": 0, 00:11:53.173 "io_qpairs": 0, 00:11:53.173 "current_admin_qpairs": 0, 00:11:53.173 "current_io_qpairs": 0, 00:11:53.173 "pending_bdev_io": 0, 00:11:53.173 "completed_nvme_io": 0, 00:11:53.173 "transports": [ 00:11:53.173 { 00:11:53.173 "trtype": "TCP" 00:11:53.173 } 00:11:53.173 ] 00:11:53.173 }, 00:11:53.173 { 00:11:53.173 "name": "nvmf_tgt_poll_group_003", 00:11:53.173 "admin_qpairs": 0, 00:11:53.173 "io_qpairs": 0, 00:11:53.173 "current_admin_qpairs": 0, 00:11:53.173 "current_io_qpairs": 0, 00:11:53.173 "pending_bdev_io": 0, 00:11:53.173 "completed_nvme_io": 0, 00:11:53.173 "transports": [ 00:11:53.173 { 00:11:53.173 "trtype": "TCP" 00:11:53.173 } 00:11:53.173 ] 00:11:53.173 } 00:11:53.173 ] 00:11:53.173 }' 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:53.173 
04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.173 Malloc1 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:53.173 04:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.173 [2024-12-10 04:48:44.212117] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:53.173 [2024-12-10 04:48:44.246715] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:11:53.173 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:53.173 could not add new controller: failed to write to nvme-fabrics device 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.173 04:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.553 04:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.553 04:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:54.553 04:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.553 04:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:54.553 04:48:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:56.457 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:56.457 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:56.457 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.457 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:56.457 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.457 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
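The `NOT` wrapper used around the first `nvme connect` above asserts that a command fails, which is exactly what the test wants when the host NQN is not yet in the subsystem's allow list. A much-simplified sketch of that inversion logic (the real `valid_exec_arg`/`NOT` helper in `autotest_common.sh` also resolves the executable and distinguishes expected exit codes):

```shell
# Simplified sketch: run a command, succeed only if it fails.
NOT() {
  if "$@"; then
    return 1    # command unexpectedly succeeded
  fi
  return 0      # command failed, which is the expected outcome
}

NOT false && echo "failure correctly inverted"
```

After `nvmf_subsystem_add_host` registers the host NQN, the same connect command is rerun without `NOT` and is expected to succeed.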
00:11:56.457 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.457 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.457 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:56.457 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:56.457 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.457 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:56.457 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.457 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:56.457 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:56.458 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.458 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.458 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.458 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:56.458 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:56.458 04:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:56.458 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:56.458 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.458 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:56.458 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.458 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:56.458 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.458 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:56.458 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:56.458 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:56.458 [2024-12-10 04:48:47.571138] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:11:56.717 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:56.717 could not add new controller: failed to write to nvme-fabrics device 00:11:56.717 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:56.717 
04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:56.717 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:56.717 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:56.717 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:56.717 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.717 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.717 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.717 04:48:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:58.095 04:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:58.095 04:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:58.095 04:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.095 04:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:58.095 04:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:59.999 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:59.999 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:59.999 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:59.999 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:00.000 04:48:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.000 [2024-12-10 04:48:50.934672] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.000 04:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.378 04:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.378 04:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:01.378 04:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.378 04:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:01.378 04:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.283 
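The loop being replayed here (this is iteration 2 of `seq 1 5`) exercises one full subsystem lifecycle per pass. A hedged sketch of that sequence as plain `rpc.py`/`nvme-cli` invocations, using the NQN, serial, address, and port from this run; this is an illustration of the flow, not a runnable fragment, since it assumes a running `nvmf_tgt` and the kernel `nvme-tcp` initiator:

```shell
rpc=scripts/rpc.py   # path relative to an SPDK checkout

$rpc nvmf_create_transport -t tcp -o -u 8192             # one-time TCP transport init
$rpc bdev_malloc_create 64 512 -b Malloc1                # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# ... wait for the device to appear (waitforserial), run I/O, then tear down:
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

Each log iteration then re-creates the subsystem from scratch, which is what makes the repeated `nvmf_create_subsystem`/`nvmf_subsystem_add_listener` notices above expected rather than errors.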
04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.283 [2024-12-10 04:48:54.349172] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.283 04:48:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.666 04:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:04.666 04:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:04.666 04:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.666 04:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:04.666 04:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.571 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.831 04:48:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.831 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.831 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.831 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.831 [2024-12-10 04:48:57.712548] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.831 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.831 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:06.831 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.831 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.831 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.831 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:06.831 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.831 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.831 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.831 04:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.767 04:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:07.767 04:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:07.767 04:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.767 04:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:07.767 04:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:09.857 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:09.857 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:09.857 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.857 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:09.857 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.857 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:09.857 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.857 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.857 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:09.857 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:09.857 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.857 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:09.857 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.857 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:09.857 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:09.857 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.857 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.116 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.116 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.116 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.116 04:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.116 [2024-12-10 04:49:01.021897] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.116 04:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:11.493 04:49:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:11.493 04:49:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:11.493 04:49:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:11.493 04:49:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:11.493 04:49:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:13.399 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:13.399 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:13.399 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.400 [2024-12-10 04:49:04.361270] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.400 04:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:14.778 04:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:14.778 04:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:14.778 04:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.778 04:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:14.778 04:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.683 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 [2024-12-10 04:49:07.643185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 [2024-12-10 04:49:07.691236] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 
04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:12:16.684 [2024-12-10 04:49:07.739340] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 [2024-12-10 04:49:07.787498] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.944 [2024-12-10 04:49:07.835670] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:16.944 "tick_rate": 2100000000, 00:12:16.944 "poll_groups": [ 00:12:16.944 { 00:12:16.944 "name": "nvmf_tgt_poll_group_000", 00:12:16.944 "admin_qpairs": 2, 00:12:16.944 "io_qpairs": 168, 00:12:16.944 "current_admin_qpairs": 0, 00:12:16.944 "current_io_qpairs": 0, 00:12:16.944 "pending_bdev_io": 0, 00:12:16.944 "completed_nvme_io": 264, 00:12:16.944 "transports": [ 00:12:16.944 { 00:12:16.944 "trtype": "TCP" 00:12:16.944 } 00:12:16.944 ] 00:12:16.944 }, 00:12:16.944 { 00:12:16.944 "name": "nvmf_tgt_poll_group_001", 00:12:16.944 "admin_qpairs": 2, 00:12:16.944 "io_qpairs": 168, 00:12:16.944 "current_admin_qpairs": 0, 00:12:16.944 "current_io_qpairs": 0, 00:12:16.944 "pending_bdev_io": 0, 00:12:16.944 "completed_nvme_io": 217, 00:12:16.944 "transports": [ 00:12:16.944 { 00:12:16.944 "trtype": "TCP" 00:12:16.944 } 00:12:16.944 ] 00:12:16.944 }, 00:12:16.944 { 00:12:16.944 "name": "nvmf_tgt_poll_group_002", 00:12:16.944 "admin_qpairs": 1, 00:12:16.944 "io_qpairs": 168, 00:12:16.944 "current_admin_qpairs": 0, 00:12:16.944 "current_io_qpairs": 0, 00:12:16.944 "pending_bdev_io": 0, 
00:12:16.944 "completed_nvme_io": 273, 00:12:16.944 "transports": [ 00:12:16.944 { 00:12:16.944 "trtype": "TCP" 00:12:16.944 } 00:12:16.944 ] 00:12:16.944 }, 00:12:16.944 { 00:12:16.944 "name": "nvmf_tgt_poll_group_003", 00:12:16.944 "admin_qpairs": 2, 00:12:16.944 "io_qpairs": 168, 00:12:16.944 "current_admin_qpairs": 0, 00:12:16.944 "current_io_qpairs": 0, 00:12:16.944 "pending_bdev_io": 0, 00:12:16.944 "completed_nvme_io": 268, 00:12:16.944 "transports": [ 00:12:16.944 { 00:12:16.944 "trtype": "TCP" 00:12:16.944 } 00:12:16.944 ] 00:12:16.944 } 00:12:16.944 ] 00:12:16.944 }' 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:16.944 04:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:16.944 rmmod nvme_tcp 00:12:16.944 rmmod nvme_fabrics 00:12:16.944 rmmod nvme_keyring 00:12:16.944 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:16.944 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:16.945 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:16.945 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 558540 ']' 00:12:16.945 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 558540 00:12:16.945 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 558540 ']' 00:12:16.945 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 558540 00:12:16.945 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:16.945 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.945 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 558540 00:12:17.204 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.204 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.204 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 558540' 00:12:17.204 killing process with pid 558540 00:12:17.204 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 558540 00:12:17.204 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 558540 00:12:17.204 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:17.204 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:17.204 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:17.204 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:17.204 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:17.204 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:17.204 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:17.204 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:17.204 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:17.204 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.204 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.204 04:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:19.740 00:12:19.740 real 0m33.591s 00:12:19.740 user 1m41.867s 00:12:19.740 sys 0m6.590s 00:12:19.740 04:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.740 ************************************ 00:12:19.740 END TEST nvmf_rpc 00:12:19.740 ************************************ 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:19.740 ************************************ 00:12:19.740 START TEST nvmf_invalid 00:12:19.740 ************************************ 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:19.740 * Looking for test storage... 
00:12:19.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:19.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.740 --rc genhtml_branch_coverage=1 00:12:19.740 --rc 
genhtml_function_coverage=1 00:12:19.740 --rc genhtml_legend=1 00:12:19.740 --rc geninfo_all_blocks=1 00:12:19.740 --rc geninfo_unexecuted_blocks=1 00:12:19.740 00:12:19.740 ' 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:19.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.740 --rc genhtml_branch_coverage=1 00:12:19.740 --rc genhtml_function_coverage=1 00:12:19.740 --rc genhtml_legend=1 00:12:19.740 --rc geninfo_all_blocks=1 00:12:19.740 --rc geninfo_unexecuted_blocks=1 00:12:19.740 00:12:19.740 ' 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.740 --rc genhtml_branch_coverage=1 00:12:19.740 --rc genhtml_function_coverage=1 00:12:19.740 --rc genhtml_legend=1 00:12:19.740 --rc geninfo_all_blocks=1 00:12:19.740 --rc geninfo_unexecuted_blocks=1 00:12:19.740 00:12:19.740 ' 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.740 --rc genhtml_branch_coverage=1 00:12:19.740 --rc genhtml_function_coverage=1 00:12:19.740 --rc genhtml_legend=1 00:12:19.740 --rc geninfo_all_blocks=1 00:12:19.740 --rc geninfo_unexecuted_blocks=1 00:12:19.740 00:12:19.740 ' 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.740 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.741 04:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:19.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:19.741 04:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:19.741 04:49:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:26.312 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.312 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:26.312 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:26.312 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:26.312 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:26.312 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:26.312 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:26.312 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:26.312 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:26.312 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:26.312 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:26.312 04:49:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:26.312 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:26.312 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:26.312 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:26.312 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.312 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.313 04:49:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:26.313 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:26.313 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:26.313 Found net devices under 0000:af:00.0: cvl_0_0 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:26.313 Found net devices under 0000:af:00.1: cvl_0_1 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.313 04:49:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.313 04:49:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:26.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:12:26.313 00:12:26.313 --- 10.0.0.2 ping statistics --- 00:12:26.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.313 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:12:26.313 00:12:26.313 --- 10.0.0.1 ping statistics --- 00:12:26.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.313 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:26.313 04:49:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:26.313 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=566204 00:12:26.314 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 566204 00:12:26.314 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.314 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 566204 ']' 00:12:26.314 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.314 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.314 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
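The `waitforlisten 566204` call above blocks until the freshly launched `nvmf_tgt` is ready to accept RPCs. A minimal sketch of that wait loop, assuming a poll-until-socket-exists strategy; the retry count, interval, and default socket path are assumptions for illustration, not SPDK's exact implementation:

```shell
# Hedged sketch of a waitforlisten-style helper: poll until the target
# process is alive AND its RPC Unix socket exists, or give up.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        # if the process already died, there is nothing to wait for
        kill -0 "$pid" 2>/dev/null || return 1
        # success once the RPC listener socket shows up
        [ -S "$rpc_addr" ] && return 0
        sleep 0.1
    done
    return 1
}
```

In the trace, the socket being waited on is `/var/tmp/spdk.sock`, printed by the "Waiting for process to start up and listen on UNIX domain socket" message.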
00:12:26.314 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.314 04:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:26.314 [2024-12-10 04:49:16.620833] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:12:26.314 [2024-12-10 04:49:16.620875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.314 [2024-12-10 04:49:16.695688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.314 [2024-12-10 04:49:16.736948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.314 [2024-12-10 04:49:16.736980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.314 [2024-12-10 04:49:16.736986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.314 [2024-12-10 04:49:16.736992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.314 [2024-12-10 04:49:16.736997] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
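The `nvmf_invalid` checks that follow drive `nvmf_create_subsystem` with bad arguments and then glob-match the captured JSON-RPC error text (the `[[ $out == *\U\n\a\b\l\e\ ...* ]]` comparisons in the trace). A minimal sketch of that matching idiom, using a response body copied from the trace rather than actually invoking `rpc.py`:

```shell
# Sample JSON-RPC error response, taken verbatim from the trace output of
# "nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21184".
out='request:
{
  "nqn": "nqn.2016-06.io.spdk:cnode21184",
  "tgt_name": "foobar",
  "method": "nvmf_create_subsystem",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -32603,
  "message": "Unable to find target foobar"
}'

# The test passes only if the target rejected the call for the expected
# reason; quoting the pattern fragment is equivalent to the trace's
# backslash-escaped glob.
if [[ $out == *"Unable to find target"* ]]; then
    echo "matched"
fi
```

The same idiom repeats below for the `Invalid SN` (serial number) and `Invalid MN` (model number) cases, differing only in the expected substring.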
00:12:26.314 [2024-12-10 04:49:16.738447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.314 [2024-12-10 04:49:16.738553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.314 [2024-12-10 04:49:16.738657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.314 [2024-12-10 04:49:16.738658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.573 04:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.573 04:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:26.573 04:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:26.573 04:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:26.573 04:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:26.573 04:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.573 04:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:26.573 04:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21184 00:12:26.573 [2024-12-10 04:49:17.666861] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:26.573 04:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:26.573 { 00:12:26.573 "nqn": "nqn.2016-06.io.spdk:cnode21184", 00:12:26.573 "tgt_name": "foobar", 00:12:26.573 "method": "nvmf_create_subsystem", 00:12:26.573 "req_id": 1 00:12:26.573 } 00:12:26.573 Got JSON-RPC error 
response 00:12:26.573 response: 00:12:26.573 { 00:12:26.573 "code": -32603, 00:12:26.573 "message": "Unable to find target foobar" 00:12:26.573 }' 00:12:26.573 04:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:26.573 { 00:12:26.573 "nqn": "nqn.2016-06.io.spdk:cnode21184", 00:12:26.573 "tgt_name": "foobar", 00:12:26.573 "method": "nvmf_create_subsystem", 00:12:26.573 "req_id": 1 00:12:26.573 } 00:12:26.573 Got JSON-RPC error response 00:12:26.573 response: 00:12:26.573 { 00:12:26.573 "code": -32603, 00:12:26.573 "message": "Unable to find target foobar" 00:12:26.573 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:26.573 04:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:26.573 04:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17589 00:12:26.832 [2024-12-10 04:49:17.867611] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17589: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:26.832 04:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:26.832 { 00:12:26.832 "nqn": "nqn.2016-06.io.spdk:cnode17589", 00:12:26.832 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:26.832 "method": "nvmf_create_subsystem", 00:12:26.832 "req_id": 1 00:12:26.832 } 00:12:26.832 Got JSON-RPC error response 00:12:26.832 response: 00:12:26.832 { 00:12:26.832 "code": -32602, 00:12:26.832 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:26.832 }' 00:12:26.832 04:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:26.832 { 00:12:26.832 "nqn": "nqn.2016-06.io.spdk:cnode17589", 00:12:26.832 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:26.832 "method": "nvmf_create_subsystem", 
00:12:26.832 "req_id": 1 00:12:26.832 } 00:12:26.832 Got JSON-RPC error response 00:12:26.832 response: 00:12:26.832 { 00:12:26.832 "code": -32602, 00:12:26.832 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:26.832 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:26.832 04:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:26.832 04:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29433 00:12:27.091 [2024-12-10 04:49:18.068239] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29433: invalid model number 'SPDK_Controller' 00:12:27.091 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:27.091 { 00:12:27.091 "nqn": "nqn.2016-06.io.spdk:cnode29433", 00:12:27.091 "model_number": "SPDK_Controller\u001f", 00:12:27.091 "method": "nvmf_create_subsystem", 00:12:27.091 "req_id": 1 00:12:27.091 } 00:12:27.091 Got JSON-RPC error response 00:12:27.091 response: 00:12:27.091 { 00:12:27.091 "code": -32602, 00:12:27.091 "message": "Invalid MN SPDK_Controller\u001f" 00:12:27.091 }' 00:12:27.091 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:27.091 { 00:12:27.091 "nqn": "nqn.2016-06.io.spdk:cnode29433", 00:12:27.092 "model_number": "SPDK_Controller\u001f", 00:12:27.092 "method": "nvmf_create_subsystem", 00:12:27.092 "req_id": 1 00:12:27.092 } 00:12:27.092 Got JSON-RPC error response 00:12:27.092 response: 00:12:27.092 { 00:12:27.092 "code": -32602, 00:12:27.092 "message": "Invalid MN SPDK_Controller\u001f" 00:12:27.092 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 
04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:27.092 04:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:27.092 04:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:27.092 04:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:27.092 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.352 04:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ] == \- ]] 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ']]n!HeP,+!lmn]_mN JFP' 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ']]n!HeP,+!lmn]_mN JFP' nqn.2016-06.io.spdk:cnode8740 00:12:27.352 [2024-12-10 04:49:18.413402] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8740: invalid serial number ']]n!HeP,+!lmn]_mN JFP' 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:27.352 { 00:12:27.352 "nqn": "nqn.2016-06.io.spdk:cnode8740", 00:12:27.352 "serial_number": "]]n!HeP,+!lmn]_mN JFP", 00:12:27.352 "method": "nvmf_create_subsystem", 00:12:27.352 "req_id": 1 00:12:27.352 } 00:12:27.352 Got JSON-RPC error response 00:12:27.352 response: 00:12:27.352 { 00:12:27.352 "code": -32602, 00:12:27.352 "message": "Invalid SN ]]n!HeP,+!lmn]_mN JFP" 00:12:27.352 }' 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:27.352 { 00:12:27.352 "nqn": "nqn.2016-06.io.spdk:cnode8740", 00:12:27.352 "serial_number": "]]n!HeP,+!lmn]_mN JFP", 00:12:27.352 "method": "nvmf_create_subsystem", 00:12:27.352 "req_id": 1 00:12:27.352 } 00:12:27.352 Got JSON-RPC error response 00:12:27.352 response: 00:12:27.352 { 00:12:27.352 "code": -32602, 00:12:27.352 "message": "Invalid SN ]]n!HeP,+!lmn]_mN JFP" 00:12:27.352 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:27.352 04:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.352 04:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.352 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:27.612 04:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:27.612 04:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.612 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:27.613 04:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:27.613 04:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:27.613 04:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:27.613 04:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.613 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.614 04:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ v == \- ]] 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'vW\1SuG'\''b8K]9Pl(a|8HGY>+1gqmO;`' 00:12:27.614 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'vW\1SuG'\''b8K]9Pl(a|8HGY>+1gqmO;`' nqn.2016-06.io.spdk:cnode1927 00:12:27.873 [2024-12-10 04:49:18.899072] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1927: invalid model number 'vW\1SuG'b8K]9Pl(a|8HGY>+1gqmO;`' 00:12:27.873 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:27.873 { 00:12:27.873 "nqn": "nqn.2016-06.io.spdk:cnode1927", 00:12:27.873 "model_number": "vW\\1SuG'\''b8K]9Pl(a|8HGY>\u007f+1gqmO;`", 00:12:27.873 "method": "nvmf_create_subsystem", 00:12:27.873 "req_id": 1 00:12:27.873 } 00:12:27.873 Got JSON-RPC error response 00:12:27.873 response: 00:12:27.873 { 00:12:27.873 "code": -32602, 00:12:27.873 "message": "Invalid MN vW\\1SuG'\''b8K]9Pl(a|8HGY>\u007f+1gqmO;`" 00:12:27.873 }' 00:12:27.873 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:27.873 { 00:12:27.873 "nqn": "nqn.2016-06.io.spdk:cnode1927", 
00:12:27.873 "model_number": "vW\\1SuG'b8K]9Pl(a|8HGY>\u007f+1gqmO;`", 00:12:27.873 "method": "nvmf_create_subsystem", 00:12:27.873 "req_id": 1 00:12:27.873 } 00:12:27.873 Got JSON-RPC error response 00:12:27.873 response: 00:12:27.873 { 00:12:27.873 "code": -32602, 00:12:27.873 "message": "Invalid MN vW\\1SuG'b8K]9Pl(a|8HGY>\u007f+1gqmO;`" 00:12:27.873 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:27.873 04:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:28.132 [2024-12-10 04:49:19.095788] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.132 04:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:28.390 04:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:28.390 04:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:28.390 04:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:28.390 04:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:28.390 04:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:28.390 [2024-12-10 04:49:19.505137] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:28.649 04:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:28.649 { 00:12:28.649 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:28.649 "listen_address": { 00:12:28.649 "trtype": "tcp", 00:12:28.649 "traddr": "", 00:12:28.649 "trsvcid": "4421" 00:12:28.649 }, 00:12:28.650 "method": 
"nvmf_subsystem_remove_listener", 00:12:28.650 "req_id": 1 00:12:28.650 } 00:12:28.650 Got JSON-RPC error response 00:12:28.650 response: 00:12:28.650 { 00:12:28.650 "code": -32602, 00:12:28.650 "message": "Invalid parameters" 00:12:28.650 }' 00:12:28.650 04:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:28.650 { 00:12:28.650 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:28.650 "listen_address": { 00:12:28.650 "trtype": "tcp", 00:12:28.650 "traddr": "", 00:12:28.650 "trsvcid": "4421" 00:12:28.650 }, 00:12:28.650 "method": "nvmf_subsystem_remove_listener", 00:12:28.650 "req_id": 1 00:12:28.650 } 00:12:28.650 Got JSON-RPC error response 00:12:28.650 response: 00:12:28.650 { 00:12:28.650 "code": -32602, 00:12:28.650 "message": "Invalid parameters" 00:12:28.650 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:28.650 04:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2425 -i 0 00:12:28.650 [2024-12-10 04:49:19.693728] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2425: invalid cntlid range [0-65519] 00:12:28.650 04:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:28.650 { 00:12:28.650 "nqn": "nqn.2016-06.io.spdk:cnode2425", 00:12:28.650 "min_cntlid": 0, 00:12:28.650 "method": "nvmf_create_subsystem", 00:12:28.650 "req_id": 1 00:12:28.650 } 00:12:28.650 Got JSON-RPC error response 00:12:28.650 response: 00:12:28.650 { 00:12:28.650 "code": -32602, 00:12:28.650 "message": "Invalid cntlid range [0-65519]" 00:12:28.650 }' 00:12:28.650 04:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:28.650 { 00:12:28.650 "nqn": "nqn.2016-06.io.spdk:cnode2425", 00:12:28.650 "min_cntlid": 0, 00:12:28.650 "method": "nvmf_create_subsystem", 00:12:28.650 "req_id": 1 
00:12:28.650 } 00:12:28.650 Got JSON-RPC error response 00:12:28.650 response: 00:12:28.650 { 00:12:28.650 "code": -32602, 00:12:28.650 "message": "Invalid cntlid range [0-65519]" 00:12:28.650 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:28.650 04:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11103 -i 65520 00:12:28.909 [2024-12-10 04:49:19.886438] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11103: invalid cntlid range [65520-65519] 00:12:28.909 04:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:28.909 { 00:12:28.909 "nqn": "nqn.2016-06.io.spdk:cnode11103", 00:12:28.909 "min_cntlid": 65520, 00:12:28.909 "method": "nvmf_create_subsystem", 00:12:28.909 "req_id": 1 00:12:28.909 } 00:12:28.909 Got JSON-RPC error response 00:12:28.909 response: 00:12:28.909 { 00:12:28.909 "code": -32602, 00:12:28.909 "message": "Invalid cntlid range [65520-65519]" 00:12:28.909 }' 00:12:28.909 04:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:28.909 { 00:12:28.909 "nqn": "nqn.2016-06.io.spdk:cnode11103", 00:12:28.909 "min_cntlid": 65520, 00:12:28.909 "method": "nvmf_create_subsystem", 00:12:28.909 "req_id": 1 00:12:28.909 } 00:12:28.909 Got JSON-RPC error response 00:12:28.909 response: 00:12:28.909 { 00:12:28.909 "code": -32602, 00:12:28.909 "message": "Invalid cntlid range [65520-65519]" 00:12:28.909 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:28.909 04:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32183 -I 0 00:12:29.168 [2024-12-10 04:49:20.107152] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32183: invalid 
cntlid range [1-0] 00:12:29.168 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:29.168 { 00:12:29.168 "nqn": "nqn.2016-06.io.spdk:cnode32183", 00:12:29.168 "max_cntlid": 0, 00:12:29.168 "method": "nvmf_create_subsystem", 00:12:29.168 "req_id": 1 00:12:29.168 } 00:12:29.168 Got JSON-RPC error response 00:12:29.168 response: 00:12:29.168 { 00:12:29.168 "code": -32602, 00:12:29.168 "message": "Invalid cntlid range [1-0]" 00:12:29.168 }' 00:12:29.168 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:29.168 { 00:12:29.168 "nqn": "nqn.2016-06.io.spdk:cnode32183", 00:12:29.168 "max_cntlid": 0, 00:12:29.168 "method": "nvmf_create_subsystem", 00:12:29.168 "req_id": 1 00:12:29.168 } 00:12:29.168 Got JSON-RPC error response 00:12:29.168 response: 00:12:29.168 { 00:12:29.168 "code": -32602, 00:12:29.168 "message": "Invalid cntlid range [1-0]" 00:12:29.168 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:29.168 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16774 -I 65520 00:12:29.427 [2024-12-10 04:49:20.319868] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16774: invalid cntlid range [1-65520] 00:12:29.427 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:29.427 { 00:12:29.427 "nqn": "nqn.2016-06.io.spdk:cnode16774", 00:12:29.427 "max_cntlid": 65520, 00:12:29.427 "method": "nvmf_create_subsystem", 00:12:29.427 "req_id": 1 00:12:29.427 } 00:12:29.427 Got JSON-RPC error response 00:12:29.427 response: 00:12:29.427 { 00:12:29.427 "code": -32602, 00:12:29.427 "message": "Invalid cntlid range [1-65520]" 00:12:29.427 }' 00:12:29.427 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:29.427 { 00:12:29.427 
"nqn": "nqn.2016-06.io.spdk:cnode16774", 00:12:29.427 "max_cntlid": 65520, 00:12:29.427 "method": "nvmf_create_subsystem", 00:12:29.427 "req_id": 1 00:12:29.427 } 00:12:29.427 Got JSON-RPC error response 00:12:29.427 response: 00:12:29.427 { 00:12:29.427 "code": -32602, 00:12:29.427 "message": "Invalid cntlid range [1-65520]" 00:12:29.427 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:29.427 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27254 -i 6 -I 5 00:12:29.427 [2024-12-10 04:49:20.524586] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27254: invalid cntlid range [6-5] 00:12:29.427 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:29.427 { 00:12:29.427 "nqn": "nqn.2016-06.io.spdk:cnode27254", 00:12:29.427 "min_cntlid": 6, 00:12:29.427 "max_cntlid": 5, 00:12:29.427 "method": "nvmf_create_subsystem", 00:12:29.427 "req_id": 1 00:12:29.427 } 00:12:29.427 Got JSON-RPC error response 00:12:29.427 response: 00:12:29.427 { 00:12:29.427 "code": -32602, 00:12:29.427 "message": "Invalid cntlid range [6-5]" 00:12:29.427 }' 00:12:29.427 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:29.427 { 00:12:29.427 "nqn": "nqn.2016-06.io.spdk:cnode27254", 00:12:29.427 "min_cntlid": 6, 00:12:29.427 "max_cntlid": 5, 00:12:29.427 "method": "nvmf_create_subsystem", 00:12:29.427 "req_id": 1 00:12:29.427 } 00:12:29.427 Got JSON-RPC error response 00:12:29.427 response: 00:12:29.427 { 00:12:29.427 "code": -32602, 00:12:29.427 "message": "Invalid cntlid range [6-5]" 00:12:29.427 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:29.427 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_delete_target --name foobar 00:12:29.686 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:29.686 { 00:12:29.686 "name": "foobar", 00:12:29.686 "method": "nvmf_delete_target", 00:12:29.686 "req_id": 1 00:12:29.686 } 00:12:29.686 Got JSON-RPC error response 00:12:29.686 response: 00:12:29.686 { 00:12:29.686 "code": -32602, 00:12:29.686 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:29.686 }' 00:12:29.686 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:29.686 { 00:12:29.686 "name": "foobar", 00:12:29.686 "method": "nvmf_delete_target", 00:12:29.686 "req_id": 1 00:12:29.686 } 00:12:29.686 Got JSON-RPC error response 00:12:29.686 response: 00:12:29.686 { 00:12:29.686 "code": -32602, 00:12:29.686 "message": "The specified target doesn't exist, cannot delete it." 00:12:29.686 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:29.686 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:29.686 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:29.686 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:29.687 rmmod nvme_tcp 00:12:29.687 rmmod nvme_fabrics 00:12:29.687 rmmod nvme_keyring 00:12:29.687 04:49:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 566204 ']' 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 566204 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 566204 ']' 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 566204 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 566204 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 566204' 00:12:29.687 killing process with pid 566204 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 566204 00:12:29.687 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 566204 00:12:29.946 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:29.946 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:29.946 04:49:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:29.946 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:29.946 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:29.946 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:29.946 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:29.946 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:29.946 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:29.946 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.946 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.946 04:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:32.482 00:12:32.482 real 0m12.593s 00:12:32.482 user 0m21.234s 00:12:32.482 sys 0m5.407s 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:32.482 ************************************ 00:12:32.482 END TEST nvmf_invalid 00:12:32.482 ************************************ 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:32.482 ************************************ 00:12:32.482 START TEST nvmf_connect_stress 00:12:32.482 ************************************ 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:32.482 * Looking for test storage... 00:12:32.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 
00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
scripts/common.sh@366 -- # ver2[v]=2 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:32.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.482 --rc genhtml_branch_coverage=1 00:12:32.482 --rc genhtml_function_coverage=1 00:12:32.482 --rc genhtml_legend=1 00:12:32.482 --rc geninfo_all_blocks=1 00:12:32.482 --rc geninfo_unexecuted_blocks=1 00:12:32.482 00:12:32.482 ' 00:12:32.482 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:32.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.482 --rc genhtml_branch_coverage=1 00:12:32.482 --rc genhtml_function_coverage=1 00:12:32.482 --rc genhtml_legend=1 00:12:32.482 --rc geninfo_all_blocks=1 00:12:32.482 --rc geninfo_unexecuted_blocks=1 00:12:32.483 00:12:32.483 ' 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:32.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.483 --rc genhtml_branch_coverage=1 00:12:32.483 --rc genhtml_function_coverage=1 00:12:32.483 --rc genhtml_legend=1 00:12:32.483 --rc geninfo_all_blocks=1 00:12:32.483 --rc geninfo_unexecuted_blocks=1 00:12:32.483 00:12:32.483 ' 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:32.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.483 --rc genhtml_branch_coverage=1 00:12:32.483 --rc genhtml_function_coverage=1 00:12:32.483 --rc genhtml_legend=1 00:12:32.483 --rc geninfo_all_blocks=1 00:12:32.483 --rc geninfo_unexecuted_blocks=1 00:12:32.483 00:12:32.483 ' 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:32.483 04:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.483 04:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:32.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:32.483 04:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:32.483 04:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- 
# local -a pci_devs 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:39.053 
Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:39.053 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:39.053 Found net devices under 0000:af:00.0: cvl_0_0 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.053 04:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:39.053 Found net devices under 0000:af:00.1: cvl_0_1 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:39.053 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.054 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.054 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:39.054 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:39.054 
04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.054 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.054 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:39.054 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:39.054 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.054 04:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:12:39.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:39.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:12:39.054 00:12:39.054 --- 10.0.0.2 ping statistics --- 00:12:39.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.054 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:39.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:12:39.054 00:12:39.054 --- 10.0.0.1 ping statistics --- 00:12:39.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.054 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 
0xE 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=570516 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 570516 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 570516 ']' 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.054 [2024-12-10 04:49:29.262624] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:12:39.054 [2024-12-10 04:49:29.262664] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.054 [2024-12-10 04:49:29.340460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:39.054 [2024-12-10 04:49:29.380031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.054 [2024-12-10 04:49:29.380065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.054 [2024-12-10 04:49:29.380072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.054 [2024-12-10 04:49:29.380078] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.054 [2024-12-10 04:49:29.380082] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:39.054 [2024-12-10 04:49:29.381426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.054 [2024-12-10 04:49:29.381536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.054 [2024-12-10 04:49:29.381537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.054 [2024-12-10 04:49:29.525784] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.054 [2024-12-10 04:49:29.546014] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.054 NULL1 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=570540 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.054 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.055 04:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.314 04:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.314 04:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:39.314 04:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.314 04:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.314 04:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.572 04:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.572 04:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:39.572 04:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.572 04:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.572 04:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.831 04:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.831 04:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:39.831 04:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.831 04:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.831 04:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.399 04:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.399 04:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:40.399 04:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.399 04:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.399 04:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.658 04:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.658 04:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:40.658 04:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.658 04:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.658 04:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.916 04:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.916 04:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:40.916 04:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.916 04:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.916 04:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.175 04:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.175 04:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:41.175 04:49:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.175 04:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.175 04:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.743 04:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.743 04:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:41.743 04:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.743 04:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.743 04:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.001 04:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.001 04:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:42.001 04:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.001 04:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.001 04:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.260 04:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.260 04:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:42.260 04:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.260 04:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.260 04:49:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.519 04:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.519 04:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:42.519 04:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.519 04:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.519 04:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.777 04:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.777 04:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:42.777 04:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.777 04:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.777 04:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.344 04:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.345 04:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:43.345 04:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.345 04:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.345 04:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.604 04:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.604 04:49:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:43.604 04:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.604 04:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.604 04:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.863 04:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.863 04:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:43.863 04:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.863 04:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.863 04:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.121 04:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.121 04:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:44.121 04:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.121 04:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.121 04:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.379 04:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.379 04:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:44.379 04:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.379 04:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.379 04:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.947 04:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.947 04:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:44.947 04:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.947 04:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.947 04:49:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.206 04:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.206 04:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:45.206 04:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.206 04:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.206 04:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.464 04:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.464 04:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:45.464 04:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.464 04:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.464 04:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.723 04:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.723 04:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:45.723 04:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.723 04:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.724 04:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.291 04:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.291 04:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:46.291 04:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.291 04:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.291 04:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.550 04:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.550 04:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:46.550 04:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.550 04:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.550 04:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.809 04:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.809 04:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:46.809 
04:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.809 04:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.809 04:49:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.068 04:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.068 04:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:47.068 04:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.068 04:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.068 04:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.327 04:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.327 04:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:47.327 04:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.327 04:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.327 04:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.895 04:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.895 04:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:47.895 04:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.895 04:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.895 
04:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:48.155 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.155 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:48.155 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.155 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.155 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:48.414 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.414 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:48.414 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.414 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.414 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:48.672 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:48.672 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.672 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 570540 00:12:48.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (570540) - No such process 00:12:48.672 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 570540 00:12:48.672 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:48.672 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:48.672 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:48.672 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:48.672 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:48.672 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:48.672 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:48.672 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:48.672 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:48.672 rmmod nvme_tcp 00:12:48.672 rmmod nvme_fabrics 00:12:48.672 rmmod nvme_keyring 00:12:48.672 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:48.930 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:48.930 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:48.930 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 570516 ']' 00:12:48.930 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 570516 00:12:48.930 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 570516 ']' 00:12:48.930 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 570516 00:12:48.930 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 
00:12:48.930 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.930 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 570516 00:12:48.930 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:48.930 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:48.930 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 570516' 00:12:48.930 killing process with pid 570516 00:12:48.930 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 570516 00:12:48.930 04:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 570516 00:12:48.930 04:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:48.930 04:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:48.930 04:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:48.930 04:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:48.930 04:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:48.930 04:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:48.930 04:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:48.930 04:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:48.930 04:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:12:48.930 04:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.930 04:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.930 04:49:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:51.467 00:12:51.467 real 0m18.992s 00:12:51.467 user 0m39.501s 00:12:51.467 sys 0m8.473s 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.467 ************************************ 00:12:51.467 END TEST nvmf_connect_stress 00:12:51.467 ************************************ 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:51.467 ************************************ 00:12:51.467 START TEST nvmf_fused_ordering 00:12:51.467 ************************************ 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:51.467 * Looking for test storage... 
00:12:51.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:51.467 04:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.467 04:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:51.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.467 --rc genhtml_branch_coverage=1 00:12:51.467 --rc genhtml_function_coverage=1 00:12:51.467 --rc genhtml_legend=1 00:12:51.467 --rc geninfo_all_blocks=1 00:12:51.467 --rc geninfo_unexecuted_blocks=1 00:12:51.467 00:12:51.467 ' 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:51.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.467 --rc genhtml_branch_coverage=1 00:12:51.467 --rc genhtml_function_coverage=1 00:12:51.467 --rc genhtml_legend=1 00:12:51.467 --rc geninfo_all_blocks=1 00:12:51.467 --rc geninfo_unexecuted_blocks=1 00:12:51.467 00:12:51.467 ' 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:51.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.467 --rc genhtml_branch_coverage=1 00:12:51.467 --rc genhtml_function_coverage=1 00:12:51.467 --rc genhtml_legend=1 00:12:51.467 --rc geninfo_all_blocks=1 00:12:51.467 --rc geninfo_unexecuted_blocks=1 00:12:51.467 00:12:51.467 ' 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:51.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.467 --rc genhtml_branch_coverage=1 00:12:51.467 --rc genhtml_function_coverage=1 00:12:51.467 --rc genhtml_legend=1 00:12:51.467 --rc geninfo_all_blocks=1 00:12:51.467 --rc geninfo_unexecuted_blocks=1 00:12:51.467 00:12:51.467 ' 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.467 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:51.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:51.468 04:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:58.039 04:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:58.039 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:58.039 04:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:58.039 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.039 04:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:58.039 Found net devices under 0000:af:00.0: cvl_0_0 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:58.039 Found net devices under 0000:af:00.1: cvl_0_1 
00:12:58.039 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:58.040 04:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:58.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:58.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.511 ms 00:12:58.040 00:12:58.040 --- 10.0.0.2 ping statistics --- 00:12:58.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.040 rtt min/avg/max/mdev = 0.511/0.511/0.511/0.000 ms 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:58.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:58.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:12:58.040 00:12:58.040 --- 10.0.0.1 ping statistics --- 00:12:58.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.040 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:58.040 04:49:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=575797 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 575797 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 575797 ']' 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 [2024-12-10 04:49:48.314175] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:12:58.040 [2024-12-10 04:49:48.314226] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.040 [2024-12-10 04:49:48.392867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.040 [2024-12-10 04:49:48.432256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.040 [2024-12-10 04:49:48.432290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.040 [2024-12-10 04:49:48.432298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.040 [2024-12-10 04:49:48.432303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.040 [2024-12-10 04:49:48.432309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:58.040 [2024-12-10 04:49:48.432775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 [2024-12-10 04:49:48.571960] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 [2024-12-10 04:49:48.596148] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 NULL1 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
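The `rpc_cmd` calls in fused_ordering.sh@15–20 above map one-to-one onto `scripts/rpc.py` invocations against the running target. A sketch, with the arguments taken verbatim from this log (the RPC path is this workspace's checkout; all commands assume nvmf_tgt is up and listening on /var/tmp/spdk.sock):

```shell
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8192-byte in-capsule data
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                   # 1000 MiB null bdev, 512-byte blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

The null bdev size explains the "Namespace ID: 1 size: 1GB" line the fused_ordering initiator prints after attaching below.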
common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.040 04:49:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:58.040 [2024-12-10 04:49:48.657192] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:12:58.041 [2024-12-10 04:49:48.657223] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid575824 ] 00:12:58.041 Attached to nqn.2016-06.io.spdk:cnode1 00:12:58.041 Namespace ID: 1 size: 1GB 00:12:58.041 fused_ordering(0) 00:12:58.041 fused_ordering(1) 00:12:58.041 fused_ordering(2) 00:12:58.041 fused_ordering(3) 00:12:58.041 fused_ordering(4) 00:12:58.041 fused_ordering(5) 00:12:58.041 fused_ordering(6) 00:12:58.041 fused_ordering(7) 00:12:58.041 fused_ordering(8) 00:12:58.041 fused_ordering(9) 00:12:58.041 fused_ordering(10) 00:12:58.041 fused_ordering(11) 00:12:58.041 fused_ordering(12) 00:12:58.041 fused_ordering(13) 00:12:58.041 fused_ordering(14) 00:12:58.041 fused_ordering(15) 00:12:58.041 fused_ordering(16) 00:12:58.041 fused_ordering(17) 00:12:58.041 fused_ordering(18) 00:12:58.041 fused_ordering(19) 00:12:58.041 fused_ordering(20) 00:12:58.041 fused_ordering(21) 00:12:58.041 fused_ordering(22) 00:12:58.041 fused_ordering(23) 00:12:58.041 fused_ordering(24) 00:12:58.041 fused_ordering(25) 00:12:58.041 fused_ordering(26) 00:12:58.041 fused_ordering(27) 00:12:58.041 
fused_ordering(28) 00:12:58.041 ... fused_ordering(876) 00:12:59.391 [repetitive fused-ordering iteration counters 28 through 876 elided; the counters are strictly consecutive, with the log timestamp advancing 00:12:58.041 -> 00:12:58.301 after (205), -> 00:12:58.561 after (410), -> 00:12:59.131 after (615), -> 00:12:59.391 after (820)]
00:12:59.391 fused_ordering(877) 00:12:59.391 fused_ordering(878) 00:12:59.391 fused_ordering(879) 00:12:59.391 fused_ordering(880) 00:12:59.391 fused_ordering(881) 00:12:59.391 fused_ordering(882) 00:12:59.391 fused_ordering(883) 00:12:59.391 fused_ordering(884) 00:12:59.391 fused_ordering(885) 00:12:59.391 fused_ordering(886) 00:12:59.391 fused_ordering(887) 00:12:59.391 fused_ordering(888) 00:12:59.391 fused_ordering(889) 00:12:59.391 fused_ordering(890) 00:12:59.391 fused_ordering(891) 00:12:59.391 fused_ordering(892) 00:12:59.391 fused_ordering(893) 00:12:59.391 fused_ordering(894) 00:12:59.391 fused_ordering(895) 00:12:59.391 fused_ordering(896) 00:12:59.391 fused_ordering(897) 00:12:59.391 fused_ordering(898) 00:12:59.391 fused_ordering(899) 00:12:59.391 fused_ordering(900) 00:12:59.391 fused_ordering(901) 00:12:59.391 fused_ordering(902) 00:12:59.391 fused_ordering(903) 00:12:59.391 fused_ordering(904) 00:12:59.391 fused_ordering(905) 00:12:59.391 fused_ordering(906) 00:12:59.391 fused_ordering(907) 00:12:59.391 fused_ordering(908) 00:12:59.391 fused_ordering(909) 00:12:59.391 fused_ordering(910) 00:12:59.391 fused_ordering(911) 00:12:59.391 fused_ordering(912) 00:12:59.391 fused_ordering(913) 00:12:59.391 fused_ordering(914) 00:12:59.391 fused_ordering(915) 00:12:59.391 fused_ordering(916) 00:12:59.391 fused_ordering(917) 00:12:59.391 fused_ordering(918) 00:12:59.391 fused_ordering(919) 00:12:59.391 fused_ordering(920) 00:12:59.391 fused_ordering(921) 00:12:59.391 fused_ordering(922) 00:12:59.391 fused_ordering(923) 00:12:59.391 fused_ordering(924) 00:12:59.391 fused_ordering(925) 00:12:59.391 fused_ordering(926) 00:12:59.391 fused_ordering(927) 00:12:59.391 fused_ordering(928) 00:12:59.391 fused_ordering(929) 00:12:59.391 fused_ordering(930) 00:12:59.391 fused_ordering(931) 00:12:59.391 fused_ordering(932) 00:12:59.391 fused_ordering(933) 00:12:59.391 fused_ordering(934) 00:12:59.391 fused_ordering(935) 00:12:59.391 fused_ordering(936) 00:12:59.391 
fused_ordering(937) 00:12:59.391 fused_ordering(938) 00:12:59.391 fused_ordering(939) 00:12:59.391 fused_ordering(940) 00:12:59.391 fused_ordering(941) 00:12:59.391 fused_ordering(942) 00:12:59.391 fused_ordering(943) 00:12:59.391 fused_ordering(944) 00:12:59.391 fused_ordering(945) 00:12:59.391 fused_ordering(946) 00:12:59.391 fused_ordering(947) 00:12:59.391 fused_ordering(948) 00:12:59.391 fused_ordering(949) 00:12:59.391 fused_ordering(950) 00:12:59.391 fused_ordering(951) 00:12:59.391 fused_ordering(952) 00:12:59.391 fused_ordering(953) 00:12:59.391 fused_ordering(954) 00:12:59.391 fused_ordering(955) 00:12:59.391 fused_ordering(956) 00:12:59.391 fused_ordering(957) 00:12:59.391 fused_ordering(958) 00:12:59.391 fused_ordering(959) 00:12:59.391 fused_ordering(960) 00:12:59.391 fused_ordering(961) 00:12:59.391 fused_ordering(962) 00:12:59.391 fused_ordering(963) 00:12:59.391 fused_ordering(964) 00:12:59.391 fused_ordering(965) 00:12:59.391 fused_ordering(966) 00:12:59.391 fused_ordering(967) 00:12:59.391 fused_ordering(968) 00:12:59.391 fused_ordering(969) 00:12:59.391 fused_ordering(970) 00:12:59.391 fused_ordering(971) 00:12:59.391 fused_ordering(972) 00:12:59.391 fused_ordering(973) 00:12:59.391 fused_ordering(974) 00:12:59.391 fused_ordering(975) 00:12:59.391 fused_ordering(976) 00:12:59.391 fused_ordering(977) 00:12:59.391 fused_ordering(978) 00:12:59.391 fused_ordering(979) 00:12:59.391 fused_ordering(980) 00:12:59.391 fused_ordering(981) 00:12:59.391 fused_ordering(982) 00:12:59.391 fused_ordering(983) 00:12:59.391 fused_ordering(984) 00:12:59.391 fused_ordering(985) 00:12:59.391 fused_ordering(986) 00:12:59.391 fused_ordering(987) 00:12:59.391 fused_ordering(988) 00:12:59.391 fused_ordering(989) 00:12:59.391 fused_ordering(990) 00:12:59.391 fused_ordering(991) 00:12:59.391 fused_ordering(992) 00:12:59.391 fused_ordering(993) 00:12:59.391 fused_ordering(994) 00:12:59.391 fused_ordering(995) 00:12:59.391 fused_ordering(996) 00:12:59.391 fused_ordering(997) 
00:12:59.391 fused_ordering(998) 00:12:59.391 fused_ordering(999) 00:12:59.391 fused_ordering(1000) 00:12:59.391 fused_ordering(1001) 00:12:59.391 fused_ordering(1002) 00:12:59.391 fused_ordering(1003) 00:12:59.391 fused_ordering(1004) 00:12:59.391 fused_ordering(1005) 00:12:59.391 fused_ordering(1006) 00:12:59.391 fused_ordering(1007) 00:12:59.391 fused_ordering(1008) 00:12:59.391 fused_ordering(1009) 00:12:59.391 fused_ordering(1010) 00:12:59.391 fused_ordering(1011) 00:12:59.391 fused_ordering(1012) 00:12:59.391 fused_ordering(1013) 00:12:59.391 fused_ordering(1014) 00:12:59.391 fused_ordering(1015) 00:12:59.391 fused_ordering(1016) 00:12:59.391 fused_ordering(1017) 00:12:59.391 fused_ordering(1018) 00:12:59.391 fused_ordering(1019) 00:12:59.391 fused_ordering(1020) 00:12:59.391 fused_ordering(1021) 00:12:59.391 fused_ordering(1022) 00:12:59.391 fused_ordering(1023) 00:12:59.391 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:59.391 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:59.391 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:59.391 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:59.391 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:59.391 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:59.392 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:59.392 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:59.392 rmmod nvme_tcp 00:12:59.392 rmmod nvme_fabrics 00:12:59.392 rmmod nvme_keyring 00:12:59.392 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 575797 ']' 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 575797 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 575797 ']' 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 575797 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 575797 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 575797' 00:12:59.651 killing process with pid 575797 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 575797 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 575797 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.651 04:49:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.186 04:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:02.186 00:13:02.186 real 0m10.641s 00:13:02.186 user 0m5.001s 00:13:02.186 sys 0m5.821s 00:13:02.186 04:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.186 04:49:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:02.186 ************************************ 00:13:02.186 END TEST nvmf_fused_ordering 00:13:02.186 ************************************ 00:13:02.186 04:49:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:02.186 04:49:52 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:02.186 04:49:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.186 04:49:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:02.186 ************************************ 00:13:02.186 START TEST nvmf_ns_masking 00:13:02.186 ************************************ 00:13:02.186 04:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:02.186 * Looking for test storage... 00:13:02.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:02.186 04:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:02.186 04:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:13:02.186 04:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:02.186 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:02.186 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:02.186 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:02.186 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:02.186 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:02.186 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:02.187 04:49:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:02.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.187 --rc genhtml_branch_coverage=1 00:13:02.187 --rc genhtml_function_coverage=1 00:13:02.187 --rc genhtml_legend=1 00:13:02.187 --rc geninfo_all_blocks=1 00:13:02.187 --rc geninfo_unexecuted_blocks=1 00:13:02.187 00:13:02.187 ' 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:02.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.187 --rc genhtml_branch_coverage=1 00:13:02.187 --rc genhtml_function_coverage=1 00:13:02.187 --rc genhtml_legend=1 00:13:02.187 --rc geninfo_all_blocks=1 00:13:02.187 --rc geninfo_unexecuted_blocks=1 00:13:02.187 00:13:02.187 ' 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:02.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.187 --rc genhtml_branch_coverage=1 00:13:02.187 --rc genhtml_function_coverage=1 00:13:02.187 --rc genhtml_legend=1 00:13:02.187 --rc geninfo_all_blocks=1 00:13:02.187 --rc geninfo_unexecuted_blocks=1 00:13:02.187 00:13:02.187 ' 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:02.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.187 --rc genhtml_branch_coverage=1 00:13:02.187 --rc 
genhtml_function_coverage=1 00:13:02.187 --rc genhtml_legend=1 00:13:02.187 --rc geninfo_all_blocks=1 00:13:02.187 --rc geninfo_unexecuted_blocks=1 00:13:02.187 00:13:02.187 ' 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:02.187 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:02.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=8bab3ba8-3dc9-43ba-befa-a43ca83479b6 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=fe326489-e390-49e5-812e-bb2110c4274d 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=37e6b22b-39ad-43f7-9eab-524d2fa7d552 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:02.188 04:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:08.893 04:49:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.893 04:49:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:08.893 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:08.893 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:13:08.893 Found net devices under 0000:af:00.0: cvl_0_0 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:08.893 Found net devices under 0000:af:00.1: cvl_0_1 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.893 04:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.893 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:08.893 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:08.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:13:08.894 00:13:08.894 --- 10.0.0.2 ping statistics --- 00:13:08.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.894 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:08.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:13:08.894 00:13:08.894 --- 10.0.0.1 ping statistics --- 00:13:08.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.894 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=579736 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 579736 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 579736 ']' 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:08.894 [2024-12-10 04:49:59.114904] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:13:08.894 [2024-12-10 04:49:59.114947] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.894 [2024-12-10 04:49:59.194538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.894 [2024-12-10 04:49:59.233867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.894 [2024-12-10 04:49:59.233900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:08.894 [2024-12-10 04:49:59.233906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.894 [2024-12-10 04:49:59.233912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.894 [2024-12-10 04:49:59.233917] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.894 [2024-12-10 04:49:59.234423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.894 04:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:09.153 [2024-12-10 04:50:00.167738] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.153 04:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:09.153 04:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:09.153 04:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:13:09.412 Malloc1 00:13:09.412 04:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:09.670 Malloc2 00:13:09.670 04:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:09.929 04:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:09.929 04:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.188 [2024-12-10 04:50:01.195621] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.188 04:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:10.188 04:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 37e6b22b-39ad-43f7-9eab-524d2fa7d552 -a 10.0.0.2 -s 4420 -i 4 00:13:10.447 04:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.447 04:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:10.447 04:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.447 04:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:10.447 04:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:12.351 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:12.351 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:12.351 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.351 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:12.351 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.351 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:12.351 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:12.351 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:12.610 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:12.610 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:12.610 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:12.610 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:12.610 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:12.610 [ 0]:0x1 00:13:12.610 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:12.610 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:12.610 
04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a5a67582eeae4d8b953f77ca7afa94cb 00:13:12.610 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a5a67582eeae4d8b953f77ca7afa94cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:12.610 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:12.869 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:12.869 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:12.869 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:12.869 [ 0]:0x1 00:13:12.869 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:12.869 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:12.869 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a5a67582eeae4d8b953f77ca7afa94cb 00:13:12.869 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a5a67582eeae4d8b953f77ca7afa94cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:12.869 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:12.869 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:12.869 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:12.869 [ 1]:0x2 00:13:12.869 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:13:12.869 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:12.869 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a51efcf3ac26468c967daac89662ea75 00:13:12.869 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a51efcf3ac26468c967daac89662ea75 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:12.869 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:12.869 04:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:13.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.128 04:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.128 04:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:13.387 04:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:13.387 04:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 37e6b22b-39ad-43f7-9eab-524d2fa7d552 -a 10.0.0.2 -s 4420 -i 4 00:13:13.646 04:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:13.646 04:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:13.646 04:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.646 04:50:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:13.646 04:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:13.646 04:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:15.549 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:15.549 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:15.549 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.549 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:15.549 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.549 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:15.549 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:15.549 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:15.549 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:15.549 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:15.549 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:15.549 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:15.807 [ 0]:0x2 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a51efcf3ac26468c967daac89662ea75 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a51efcf3ac26468c967daac89662ea75 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:15.807 04:50:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:16.066 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:16.066 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:16.066 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:16.066 [ 0]:0x1 00:13:16.066 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:16.066 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:16.066 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a5a67582eeae4d8b953f77ca7afa94cb 00:13:16.066 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a5a67582eeae4d8b953f77ca7afa94cb != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.066 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:16.066 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:16.066 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:16.066 [ 1]:0x2 00:13:16.066 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:16.066 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:16.325 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a51efcf3ac26468c967daac89662ea75 00:13:16.325 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a51efcf3ac26468c967daac89662ea75 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.325 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:16.325 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:16.325 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:16.325 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:16.325 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:16.325 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.325 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:13:16.325 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.325 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:16.325 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:16.325 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:16.325 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:16.325 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:16.584 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:16.584 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.584 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:16.584 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:16.584 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:16.584 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:16.584 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:16.584 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:16.584 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:16.584 [ 0]:0x2 00:13:16.584 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:16.584 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:16.584 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a51efcf3ac26468c967daac89662ea75 00:13:16.584 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a51efcf3ac26468c967daac89662ea75 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.584 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:16.584 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.584 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:16.843 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:16.843 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 37e6b22b-39ad-43f7-9eab-524d2fa7d552 -a 10.0.0.2 -s 4420 -i 4 00:13:16.843 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:16.843 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:16.843 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.843 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:16.843 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:16.843 04:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:19.378 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:19.379 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:19.379 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.379 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:19.379 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.379 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:19.379 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:19.379 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:19.379 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:19.379 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:19.379 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:19.379 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:19.379 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:19.379 [ 0]:0x1 00:13:19.379 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:19.379 04:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:19.379 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a5a67582eeae4d8b953f77ca7afa94cb 00:13:19.379 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a5a67582eeae4d8b953f77ca7afa94cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.379 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:19.379 04:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:19.379 [ 1]:0x2 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a51efcf3ac26468c967daac89662ea75 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a51efcf3ac26468c967daac89662ea75 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:19.379 
04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:19.379 [ 0]:0x2 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a51efcf3ac26468c967daac89662ea75 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a51efcf3ac26468c967daac89662ea75 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.379 04:50:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:19.379 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:19.639 [2024-12-10 04:50:10.546099] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:19.639 request: 00:13:19.639 { 00:13:19.639 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:19.639 "nsid": 2, 00:13:19.639 "host": "nqn.2016-06.io.spdk:host1", 00:13:19.639 "method": "nvmf_ns_remove_host", 00:13:19.639 "req_id": 1 00:13:19.639 } 00:13:19.639 Got JSON-RPC error response 00:13:19.639 response: 00:13:19.639 { 00:13:19.639 "code": -32602, 00:13:19.639 "message": "Invalid parameters" 00:13:19.639 } 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:19.639 04:50:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:19.639 [ 0]:0x2 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a51efcf3ac26468c967daac89662ea75 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a51efcf3ac26468c967daac89662ea75 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:19.639 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.899 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=581693 00:13:19.899 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:19.899 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.899 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 581693 /var/tmp/host.sock 00:13:19.899 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 581693 ']' 00:13:19.899 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:19.899 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:19.899 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:19.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:19.899 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:19.899 04:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:19.899 [2024-12-10 04:50:10.918101] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:13:19.899 [2024-12-10 04:50:10.918145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid581693 ] 00:13:19.899 [2024-12-10 04:50:10.992318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.899 [2024-12-10 04:50:11.031543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.157 04:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.157 04:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:20.157 04:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.417 04:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:20.675 04:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 8bab3ba8-3dc9-43ba-befa-a43ca83479b6 00:13:20.675 04:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:20.675 04:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8BAB3BA83DC943BABEFAA43CA83479B6 -i 00:13:20.934 04:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid fe326489-e390-49e5-812e-bb2110c4274d 00:13:20.934 04:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:20.934 04:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g FE326489E39049E5812EBB2110C4274D -i 00:13:20.934 04:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:21.193 04:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:21.452 04:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:21.452 04:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:21.710 nvme0n1 00:13:21.710 04:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:21.710 04:50:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:22.281 nvme1n2 00:13:22.281 04:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:22.281 04:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:22.281 04:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:22.281 04:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:22.281 04:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:22.281 04:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:22.281 04:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:22.281 04:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:22.281 04:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:22.542 04:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 8bab3ba8-3dc9-43ba-befa-a43ca83479b6 == \8\b\a\b\3\b\a\8\-\3\d\c\9\-\4\3\b\a\-\b\e\f\a\-\a\4\3\c\a\8\3\4\7\9\b\6 ]] 00:13:22.542 04:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:22.542 04:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:22.542 04:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:22.801 04:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ fe326489-e390-49e5-812e-bb2110c4274d == \f\e\3\2\6\4\8\9\-\e\3\9\0\-\4\9\e\5\-\8\1\2\e\-\b\b\2\1\1\0\c\4\2\7\4\d ]] 00:13:22.801 04:50:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.801 04:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:23.060 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 8bab3ba8-3dc9-43ba-befa-a43ca83479b6 00:13:23.060 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:23.060 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8BAB3BA83DC943BABEFAA43CA83479B6 00:13:23.060 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:23.060 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8BAB3BA83DC943BABEFAA43CA83479B6 00:13:23.060 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.060 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:23.060 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.060 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:23.060 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.060 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:23.060 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.060 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:23.060 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8BAB3BA83DC943BABEFAA43CA83479B6 00:13:23.319 [2024-12-10 04:50:14.292542] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:23.319 [2024-12-10 04:50:14.292574] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:23.319 [2024-12-10 04:50:14.292582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.319 request: 00:13:23.319 { 00:13:23.319 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.319 "namespace": { 00:13:23.319 "bdev_name": "invalid", 00:13:23.319 "nsid": 1, 00:13:23.319 "nguid": "8BAB3BA83DC943BABEFAA43CA83479B6", 00:13:23.319 "no_auto_visible": false, 00:13:23.319 "hide_metadata": false 00:13:23.319 }, 00:13:23.319 "method": "nvmf_subsystem_add_ns", 00:13:23.319 "req_id": 1 00:13:23.319 } 00:13:23.319 Got JSON-RPC error response 00:13:23.319 response: 00:13:23.319 { 00:13:23.319 "code": -32602, 00:13:23.319 "message": "Invalid parameters" 00:13:23.319 } 00:13:23.319 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:23.319 04:50:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:23.319 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:23.319 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:23.319 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 8bab3ba8-3dc9-43ba-befa-a43ca83479b6 00:13:23.319 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:23.319 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8BAB3BA83DC943BABEFAA43CA83479B6 -i 00:13:23.578 04:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:25.483 04:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:25.483 04:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:25.483 04:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:25.742 04:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:25.742 04:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 581693 00:13:25.742 04:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 581693 ']' 00:13:25.742 04:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 581693 00:13:25.742 04:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:25.742 04:50:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.742 04:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 581693 00:13:25.742 04:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:25.742 04:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:25.742 04:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 581693' 00:13:25.742 killing process with pid 581693 00:13:25.742 04:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 581693 00:13:25.742 04:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 581693 00:13:26.001 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:13:26.260 rmmod nvme_tcp 00:13:26.260 rmmod nvme_fabrics 00:13:26.260 rmmod nvme_keyring 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 579736 ']' 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 579736 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 579736 ']' 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 579736 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 579736 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 579736' 00:13:26.260 killing process with pid 579736 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 579736 00:13:26.260 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 579736 00:13:26.519 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:13:26.519 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:26.519 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:26.519 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:26.519 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:26.519 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:26.519 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:26.519 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:26.520 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:26.520 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.520 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.520 04:50:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:29.057 00:13:29.057 real 0m26.775s 00:13:29.057 user 0m31.974s 00:13:29.057 sys 0m7.110s 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:29.057 ************************************ 00:13:29.057 END TEST nvmf_ns_masking 00:13:29.057 ************************************ 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:29.057 
04:50:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:29.057 ************************************ 00:13:29.057 START TEST nvmf_nvme_cli 00:13:29.057 ************************************ 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:29.057 * Looking for test storage... 00:13:29.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.057 
04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:29.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.057 --rc genhtml_branch_coverage=1 00:13:29.057 --rc genhtml_function_coverage=1 00:13:29.057 --rc genhtml_legend=1 00:13:29.057 --rc geninfo_all_blocks=1 00:13:29.057 --rc geninfo_unexecuted_blocks=1 00:13:29.057 
00:13:29.057 ' 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:29.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.057 --rc genhtml_branch_coverage=1 00:13:29.057 --rc genhtml_function_coverage=1 00:13:29.057 --rc genhtml_legend=1 00:13:29.057 --rc geninfo_all_blocks=1 00:13:29.057 --rc geninfo_unexecuted_blocks=1 00:13:29.057 00:13:29.057 ' 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:29.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.057 --rc genhtml_branch_coverage=1 00:13:29.057 --rc genhtml_function_coverage=1 00:13:29.057 --rc genhtml_legend=1 00:13:29.057 --rc geninfo_all_blocks=1 00:13:29.057 --rc geninfo_unexecuted_blocks=1 00:13:29.057 00:13:29.057 ' 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:29.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.057 --rc genhtml_branch_coverage=1 00:13:29.057 --rc genhtml_function_coverage=1 00:13:29.057 --rc genhtml_legend=1 00:13:29.057 --rc geninfo_all_blocks=1 00:13:29.057 --rc geninfo_unexecuted_blocks=1 00:13:29.057 00:13:29.057 ' 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:29.057 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.058 04:50:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:29.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:29.058 04:50:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:35.630 04:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:35.630 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:35.630 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.630 04:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:35.630 Found net devices under 0000:af:00.0: cvl_0_0 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.630 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:35.631 Found net devices under 0000:af:00.1: cvl_0_1 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:35.631 04:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:35.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:13:35.631 00:13:35.631 --- 10.0.0.2 ping statistics --- 00:13:35.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.631 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:35.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:35.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:13:35.631 00:13:35.631 --- 10.0.0.1 ping statistics --- 00:13:35.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.631 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:35.631 04:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=586313 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 586313 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 586313 ']' 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.631 04:50:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.631 [2024-12-10 04:50:25.921499] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:13:35.631 [2024-12-10 04:50:25.921543] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.631 [2024-12-10 04:50:25.998326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:35.631 [2024-12-10 04:50:26.039838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.631 [2024-12-10 04:50:26.039874] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.631 [2024-12-10 04:50:26.039881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.631 [2024-12-10 04:50:26.039887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.631 [2024-12-10 04:50:26.039892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:35.631 [2024-12-10 04:50:26.041349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.631 [2024-12-10 04:50:26.041457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.631 [2024-12-10 04:50:26.041566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.631 [2024-12-10 04:50:26.041567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.631 [2024-12-10 04:50:26.178805] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.631 Malloc0 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.631 Malloc1 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.631 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.632 [2024-12-10 04:50:26.272519] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:35.632 00:13:35.632 Discovery Log Number of Records 2, Generation counter 2 00:13:35.632 =====Discovery Log Entry 0====== 00:13:35.632 trtype: tcp 00:13:35.632 adrfam: ipv4 00:13:35.632 subtype: current discovery subsystem 00:13:35.632 treq: not required 00:13:35.632 portid: 0 00:13:35.632 trsvcid: 4420 
00:13:35.632 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:35.632 traddr: 10.0.0.2 00:13:35.632 eflags: explicit discovery connections, duplicate discovery information 00:13:35.632 sectype: none 00:13:35.632 =====Discovery Log Entry 1====== 00:13:35.632 trtype: tcp 00:13:35.632 adrfam: ipv4 00:13:35.632 subtype: nvme subsystem 00:13:35.632 treq: not required 00:13:35.632 portid: 0 00:13:35.632 trsvcid: 4420 00:13:35.632 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:35.632 traddr: 10.0.0.2 00:13:35.632 eflags: none 00:13:35.632 sectype: none 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:35.632 04:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:36.569 04:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:36.569 04:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:36.569 04:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.569 04:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:36.569 04:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:36.569 04:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:38.474 
04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:38.474 /dev/nvme0n2 ]] 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:38.474 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.475 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:38.475 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.475 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:38.475 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:38.475 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.475 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:38.475 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:38.475 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.475 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:38.475 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:38.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:38.734 rmmod nvme_tcp 00:13:38.734 rmmod nvme_fabrics 00:13:38.734 rmmod nvme_keyring 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 586313 ']' 
00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 586313 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 586313 ']' 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 586313 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 586313 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 586313' 00:13:38.734 killing process with pid 586313 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 586313 00:13:38.734 04:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 586313 00:13:38.993 04:50:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:38.993 04:50:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:38.993 04:50:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:38.993 04:50:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:38.993 04:50:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:38.993 04:50:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:13:38.993 04:50:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:38.993 04:50:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:38.993 04:50:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:38.993 04:50:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.993 04:50:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.993 04:50:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:41.530 00:13:41.530 real 0m12.381s 00:13:41.530 user 0m17.609s 00:13:41.530 sys 0m5.049s 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:41.530 ************************************ 00:13:41.530 END TEST nvmf_nvme_cli 00:13:41.530 ************************************ 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:41.530 ************************************ 00:13:41.530 START TEST 
nvmf_vfio_user 00:13:41.530 ************************************ 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:41.530 * Looking for test storage... 00:13:41.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:41.530 04:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:41.530 04:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:41.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.530 --rc genhtml_branch_coverage=1 00:13:41.530 --rc genhtml_function_coverage=1 00:13:41.530 --rc genhtml_legend=1 00:13:41.530 --rc geninfo_all_blocks=1 00:13:41.530 --rc geninfo_unexecuted_blocks=1 00:13:41.530 00:13:41.530 ' 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:41.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.530 --rc genhtml_branch_coverage=1 00:13:41.530 --rc genhtml_function_coverage=1 00:13:41.530 --rc genhtml_legend=1 00:13:41.530 --rc geninfo_all_blocks=1 00:13:41.530 --rc geninfo_unexecuted_blocks=1 00:13:41.530 00:13:41.530 ' 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:41.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.530 --rc genhtml_branch_coverage=1 00:13:41.530 --rc genhtml_function_coverage=1 00:13:41.530 --rc genhtml_legend=1 00:13:41.530 --rc geninfo_all_blocks=1 00:13:41.530 --rc geninfo_unexecuted_blocks=1 00:13:41.530 00:13:41.530 ' 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:41.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.530 --rc genhtml_branch_coverage=1 00:13:41.530 --rc genhtml_function_coverage=1 00:13:41.530 --rc genhtml_legend=1 00:13:41.530 --rc geninfo_all_blocks=1 00:13:41.530 --rc geninfo_unexecuted_blocks=1 00:13:41.530 00:13:41.530 ' 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.530 
04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.530 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:41.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:41.531 04:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=587564 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 587564' 00:13:41.531 Process pid: 587564 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 587564 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
587564 ']' 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.531 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:41.531 [2024-12-10 04:50:32.449679] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:13:41.531 [2024-12-10 04:50:32.449728] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.531 [2024-12-10 04:50:32.522410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.531 [2024-12-10 04:50:32.560824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.531 [2024-12-10 04:50:32.560862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.531 [2024-12-10 04:50:32.560868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.531 [2024-12-10 04:50:32.560875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.531 [2024-12-10 04:50:32.560881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:41.531 [2024-12-10 04:50:32.562287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.531 [2024-12-10 04:50:32.562397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.531 [2024-12-10 04:50:32.562480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.531 [2024-12-10 04:50:32.562481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.790 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.790 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:41.790 04:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:42.727 04:50:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:42.986 04:50:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:42.986 04:50:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:42.986 04:50:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:42.986 04:50:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:42.986 04:50:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:42.986 Malloc1 00:13:42.986 04:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:43.245 04:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:43.503 04:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:43.762 04:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:43.762 04:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:43.762 04:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:43.762 Malloc2 00:13:44.030 04:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:44.030 04:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:44.289 04:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:44.549 04:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:44.549 04:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:44.549 04:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:13:44.549 04:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:44.549 04:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:44.549 04:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:44.549 [2024-12-10 04:50:35.527799] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:13:44.549 [2024-12-10 04:50:35.527835] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588042 ] 00:13:44.549 [2024-12-10 04:50:35.566644] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:44.549 [2024-12-10 04:50:35.571931] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:44.549 [2024-12-10 04:50:35.571953] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9d7d3b1000 00:13:44.549 [2024-12-10 04:50:35.572929] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.549 [2024-12-10 04:50:35.573930] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.549 [2024-12-10 04:50:35.574944] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.549 [2024-12-10 04:50:35.575937] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:44.549 [2024-12-10 04:50:35.576941] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:44.549 [2024-12-10 04:50:35.577952] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.549 [2024-12-10 04:50:35.578958] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:44.549 [2024-12-10 04:50:35.579965] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.549 [2024-12-10 04:50:35.580972] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:44.549 [2024-12-10 04:50:35.580981] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9d7d3a6000 00:13:44.549 [2024-12-10 04:50:35.581895] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:44.549 [2024-12-10 04:50:35.591386] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:44.549 [2024-12-10 04:50:35.591409] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:44.549 [2024-12-10 04:50:35.597069] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:44.549 [2024-12-10 04:50:35.597105] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:44.549 [2024-12-10 04:50:35.597176] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:44.549 [2024-12-10 04:50:35.597191] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:44.549 [2024-12-10 04:50:35.597196] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:44.549 [2024-12-10 04:50:35.598061] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:44.549 [2024-12-10 04:50:35.598070] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:44.550 [2024-12-10 04:50:35.598077] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:44.550 [2024-12-10 04:50:35.599066] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:44.550 [2024-12-10 04:50:35.599076] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:44.550 [2024-12-10 04:50:35.599083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:44.550 [2024-12-10 04:50:35.600065] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:44.550 [2024-12-10 04:50:35.600073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:44.550 [2024-12-10 04:50:35.601070] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:44.550 [2024-12-10 04:50:35.601077] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:44.550 [2024-12-10 04:50:35.601082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:44.550 [2024-12-10 04:50:35.601088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:44.550 [2024-12-10 04:50:35.601195] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:44.550 [2024-12-10 04:50:35.601199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:44.550 [2024-12-10 04:50:35.601204] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:44.550 [2024-12-10 04:50:35.602083] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:44.550 [2024-12-10 04:50:35.603084] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:44.550 [2024-12-10 04:50:35.604091] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:44.550 [2024-12-10 04:50:35.605092] 
vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:44.550 [2024-12-10 04:50:35.605155] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:44.550 [2024-12-10 04:50:35.606102] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:44.550 [2024-12-10 04:50:35.606109] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:44.550 [2024-12-10 04:50:35.606113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606130] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:44.550 [2024-12-10 04:50:35.606136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606153] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:44.550 [2024-12-10 04:50:35.606158] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:44.550 [2024-12-10 04:50:35.606161] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.550 [2024-12-10 04:50:35.606179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:44.550 [2024-12-10 04:50:35.606220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:13:44.550 [2024-12-10 04:50:35.606231] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:44.550 [2024-12-10 04:50:35.606236] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:44.550 [2024-12-10 04:50:35.606239] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:44.550 [2024-12-10 04:50:35.606244] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:44.550 [2024-12-10 04:50:35.606248] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:44.550 [2024-12-10 04:50:35.606252] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:44.550 [2024-12-10 04:50:35.606256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:44.550 [2024-12-10 04:50:35.606285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:44.550 [2024-12-10 04:50:35.606294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.550 [2024-12-10 04:50:35.606302] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.550 [2024-12-10 04:50:35.606309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.550 [2024-12-10 04:50:35.606316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.550 [2024-12-10 04:50:35.606321] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606337] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:44.550 [2024-12-10 04:50:35.606348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:44.550 [2024-12-10 04:50:35.606353] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:44.550 [2024-12-10 04:50:35.606357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606376] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:44.550 [2024-12-10 04:50:35.606385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:44.550 [2024-12-10 04:50:35.606434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606441] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606447] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:44.550 [2024-12-10 04:50:35.606451] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:44.550 [2024-12-10 04:50:35.606454] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.550 [2024-12-10 04:50:35.606460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:44.550 [2024-12-10 04:50:35.606473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:44.550 [2024-12-10 04:50:35.606480] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:44.550 [2024-12-10 04:50:35.606491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606504] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:44.550 [2024-12-10 04:50:35.606507] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:44.550 [2024-12-10 04:50:35.606510] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.550 [2024-12-10 04:50:35.606516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:44.550 [2024-12-10 04:50:35.606537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:44.550 [2024-12-10 04:50:35.606548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606554] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606560] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:44.550 [2024-12-10 04:50:35.606564] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:44.550 [2024-12-10 04:50:35.606567] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.550 [2024-12-10 04:50:35.606573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:44.550 [2024-12-10 04:50:35.606582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:13:44.550 [2024-12-10 04:50:35.606589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606614] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:44.550 [2024-12-10 04:50:35.606618] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:44.551 [2024-12-10 04:50:35.606622] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:44.551 [2024-12-10 04:50:35.606626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:44.551 [2024-12-10 04:50:35.606631] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:44.551 [2024-12-10 04:50:35.606646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:44.551 [2024-12-10 04:50:35.606654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:44.551 [2024-12-10 04:50:35.606664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:44.551 [2024-12-10 04:50:35.606671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:44.551 [2024-12-10 04:50:35.606680] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:44.551 [2024-12-10 04:50:35.606690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:44.551 [2024-12-10 04:50:35.606700] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:44.551 [2024-12-10 04:50:35.606712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:44.551 [2024-12-10 04:50:35.606723] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:44.551 [2024-12-10 04:50:35.606728] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:44.551 [2024-12-10 04:50:35.606731] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:44.551 [2024-12-10 04:50:35.606734] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:44.551 [2024-12-10 04:50:35.606737] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:44.551 [2024-12-10 04:50:35.606742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:44.551 [2024-12-10 04:50:35.606748] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:44.551 [2024-12-10 04:50:35.606752] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:44.551 [2024-12-10 04:50:35.606755] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.551 [2024-12-10 04:50:35.606761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:44.551 [2024-12-10 04:50:35.606767] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:44.551 [2024-12-10 04:50:35.606770] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:44.551 [2024-12-10 04:50:35.606773] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.551 [2024-12-10 04:50:35.606779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:44.551 [2024-12-10 04:50:35.606787] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:44.551 [2024-12-10 04:50:35.606790] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:44.551 [2024-12-10 04:50:35.606793] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.551 [2024-12-10 04:50:35.606799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:44.551 [2024-12-10 04:50:35.606805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:44.551 [2024-12-10 
04:50:35.606816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:44.551 [2024-12-10 04:50:35.606825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:44.551 [2024-12-10 04:50:35.606831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:44.551 ===================================================== 00:13:44.551 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:44.551 ===================================================== 00:13:44.551 Controller Capabilities/Features 00:13:44.551 ================================ 00:13:44.551 Vendor ID: 4e58 00:13:44.551 Subsystem Vendor ID: 4e58 00:13:44.551 Serial Number: SPDK1 00:13:44.551 Model Number: SPDK bdev Controller 00:13:44.551 Firmware Version: 25.01 00:13:44.551 Recommended Arb Burst: 6 00:13:44.551 IEEE OUI Identifier: 8d 6b 50 00:13:44.551 Multi-path I/O 00:13:44.551 May have multiple subsystem ports: Yes 00:13:44.551 May have multiple controllers: Yes 00:13:44.551 Associated with SR-IOV VF: No 00:13:44.551 Max Data Transfer Size: 131072 00:13:44.551 Max Number of Namespaces: 32 00:13:44.551 Max Number of I/O Queues: 127 00:13:44.551 NVMe Specification Version (VS): 1.3 00:13:44.551 NVMe Specification Version (Identify): 1.3 00:13:44.551 Maximum Queue Entries: 256 00:13:44.551 Contiguous Queues Required: Yes 00:13:44.551 Arbitration Mechanisms Supported 00:13:44.551 Weighted Round Robin: Not Supported 00:13:44.551 Vendor Specific: Not Supported 00:13:44.551 Reset Timeout: 15000 ms 00:13:44.551 Doorbell Stride: 4 bytes 00:13:44.551 NVM Subsystem Reset: Not Supported 00:13:44.551 Command Sets Supported 00:13:44.551 NVM Command Set: Supported 00:13:44.551 Boot Partition: Not Supported 00:13:44.551 Memory Page Size Minimum: 4096 bytes 00:13:44.551 
Memory Page Size Maximum: 4096 bytes 00:13:44.551 Persistent Memory Region: Not Supported 00:13:44.551 Optional Asynchronous Events Supported 00:13:44.551 Namespace Attribute Notices: Supported 00:13:44.551 Firmware Activation Notices: Not Supported 00:13:44.551 ANA Change Notices: Not Supported 00:13:44.551 PLE Aggregate Log Change Notices: Not Supported 00:13:44.551 LBA Status Info Alert Notices: Not Supported 00:13:44.551 EGE Aggregate Log Change Notices: Not Supported 00:13:44.551 Normal NVM Subsystem Shutdown event: Not Supported 00:13:44.551 Zone Descriptor Change Notices: Not Supported 00:13:44.551 Discovery Log Change Notices: Not Supported 00:13:44.551 Controller Attributes 00:13:44.551 128-bit Host Identifier: Supported 00:13:44.551 Non-Operational Permissive Mode: Not Supported 00:13:44.551 NVM Sets: Not Supported 00:13:44.551 Read Recovery Levels: Not Supported 00:13:44.551 Endurance Groups: Not Supported 00:13:44.551 Predictable Latency Mode: Not Supported 00:13:44.551 Traffic Based Keep ALive: Not Supported 00:13:44.551 Namespace Granularity: Not Supported 00:13:44.551 SQ Associations: Not Supported 00:13:44.551 UUID List: Not Supported 00:13:44.551 Multi-Domain Subsystem: Not Supported 00:13:44.551 Fixed Capacity Management: Not Supported 00:13:44.551 Variable Capacity Management: Not Supported 00:13:44.551 Delete Endurance Group: Not Supported 00:13:44.551 Delete NVM Set: Not Supported 00:13:44.551 Extended LBA Formats Supported: Not Supported 00:13:44.551 Flexible Data Placement Supported: Not Supported 00:13:44.551 00:13:44.551 Controller Memory Buffer Support 00:13:44.551 ================================ 00:13:44.551 Supported: No 00:13:44.551 00:13:44.551 Persistent Memory Region Support 00:13:44.551 ================================ 00:13:44.551 Supported: No 00:13:44.551 00:13:44.551 Admin Command Set Attributes 00:13:44.551 ============================ 00:13:44.551 Security Send/Receive: Not Supported 00:13:44.551 Format NVM: Not Supported 
00:13:44.551 Firmware Activate/Download: Not Supported 00:13:44.551 Namespace Management: Not Supported 00:13:44.551 Device Self-Test: Not Supported 00:13:44.551 Directives: Not Supported 00:13:44.551 NVMe-MI: Not Supported 00:13:44.551 Virtualization Management: Not Supported 00:13:44.551 Doorbell Buffer Config: Not Supported 00:13:44.551 Get LBA Status Capability: Not Supported 00:13:44.551 Command & Feature Lockdown Capability: Not Supported 00:13:44.551 Abort Command Limit: 4 00:13:44.551 Async Event Request Limit: 4 00:13:44.551 Number of Firmware Slots: N/A 00:13:44.551 Firmware Slot 1 Read-Only: N/A 00:13:44.551 Firmware Activation Without Reset: N/A 00:13:44.551 Multiple Update Detection Support: N/A 00:13:44.551 Firmware Update Granularity: No Information Provided 00:13:44.551 Per-Namespace SMART Log: No 00:13:44.551 Asymmetric Namespace Access Log Page: Not Supported 00:13:44.551 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:44.551 Command Effects Log Page: Supported 00:13:44.551 Get Log Page Extended Data: Supported 00:13:44.551 Telemetry Log Pages: Not Supported 00:13:44.551 Persistent Event Log Pages: Not Supported 00:13:44.551 Supported Log Pages Log Page: May Support 00:13:44.551 Commands Supported & Effects Log Page: Not Supported 00:13:44.551 Feature Identifiers & Effects Log Page:May Support 00:13:44.551 NVMe-MI Commands & Effects Log Page: May Support 00:13:44.551 Data Area 4 for Telemetry Log: Not Supported 00:13:44.551 Error Log Page Entries Supported: 128 00:13:44.551 Keep Alive: Supported 00:13:44.551 Keep Alive Granularity: 10000 ms 00:13:44.551 00:13:44.551 NVM Command Set Attributes 00:13:44.551 ========================== 00:13:44.551 Submission Queue Entry Size 00:13:44.551 Max: 64 00:13:44.551 Min: 64 00:13:44.551 Completion Queue Entry Size 00:13:44.551 Max: 16 00:13:44.551 Min: 16 00:13:44.551 Number of Namespaces: 32 00:13:44.551 Compare Command: Supported 00:13:44.551 Write Uncorrectable Command: Not Supported 00:13:44.551 Dataset 
Management Command: Supported 00:13:44.551 Write Zeroes Command: Supported 00:13:44.552 Set Features Save Field: Not Supported 00:13:44.552 Reservations: Not Supported 00:13:44.552 Timestamp: Not Supported 00:13:44.552 Copy: Supported 00:13:44.552 Volatile Write Cache: Present 00:13:44.552 Atomic Write Unit (Normal): 1 00:13:44.552 Atomic Write Unit (PFail): 1 00:13:44.552 Atomic Compare & Write Unit: 1 00:13:44.552 Fused Compare & Write: Supported 00:13:44.552 Scatter-Gather List 00:13:44.552 SGL Command Set: Supported (Dword aligned) 00:13:44.552 SGL Keyed: Not Supported 00:13:44.552 SGL Bit Bucket Descriptor: Not Supported 00:13:44.552 SGL Metadata Pointer: Not Supported 00:13:44.552 Oversized SGL: Not Supported 00:13:44.552 SGL Metadata Address: Not Supported 00:13:44.552 SGL Offset: Not Supported 00:13:44.552 Transport SGL Data Block: Not Supported 00:13:44.552 Replay Protected Memory Block: Not Supported 00:13:44.552 00:13:44.552 Firmware Slot Information 00:13:44.552 ========================= 00:13:44.552 Active slot: 1 00:13:44.552 Slot 1 Firmware Revision: 25.01 00:13:44.552 00:13:44.552 00:13:44.552 Commands Supported and Effects 00:13:44.552 ============================== 00:13:44.552 Admin Commands 00:13:44.552 -------------- 00:13:44.552 Get Log Page (02h): Supported 00:13:44.552 Identify (06h): Supported 00:13:44.552 Abort (08h): Supported 00:13:44.552 Set Features (09h): Supported 00:13:44.552 Get Features (0Ah): Supported 00:13:44.552 Asynchronous Event Request (0Ch): Supported 00:13:44.552 Keep Alive (18h): Supported 00:13:44.552 I/O Commands 00:13:44.552 ------------ 00:13:44.552 Flush (00h): Supported LBA-Change 00:13:44.552 Write (01h): Supported LBA-Change 00:13:44.552 Read (02h): Supported 00:13:44.552 Compare (05h): Supported 00:13:44.552 Write Zeroes (08h): Supported LBA-Change 00:13:44.552 Dataset Management (09h): Supported LBA-Change 00:13:44.552 Copy (19h): Supported LBA-Change 00:13:44.552 00:13:44.552 Error Log 00:13:44.552 ========= 
00:13:44.552 00:13:44.552 Arbitration 00:13:44.552 =========== 00:13:44.552 Arbitration Burst: 1 00:13:44.552 00:13:44.552 Power Management 00:13:44.552 ================ 00:13:44.552 Number of Power States: 1 00:13:44.552 Current Power State: Power State #0 00:13:44.552 Power State #0: 00:13:44.552 Max Power: 0.00 W 00:13:44.552 Non-Operational State: Operational 00:13:44.552 Entry Latency: Not Reported 00:13:44.552 Exit Latency: Not Reported 00:13:44.552 Relative Read Throughput: 0 00:13:44.552 Relative Read Latency: 0 00:13:44.552 Relative Write Throughput: 0 00:13:44.552 Relative Write Latency: 0 00:13:44.552 Idle Power: Not Reported 00:13:44.552 Active Power: Not Reported 00:13:44.552 Non-Operational Permissive Mode: Not Supported 00:13:44.552 00:13:44.552 Health Information 00:13:44.552 ================== 00:13:44.552 Critical Warnings: 00:13:44.552 Available Spare Space: OK 00:13:44.552 Temperature: OK 00:13:44.552 Device Reliability: OK 00:13:44.552 Read Only: No 00:13:44.552 Volatile Memory Backup: OK 00:13:44.552 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:44.552 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:44.552 Available Spare: 0% 00:13:44.552 Available Spare Threshold: 0% 00:13:44.552 [2024-12-10 04:50:35.606915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:44.552 [2024-12-10 04:50:35.606925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:44.552 [2024-12-10 04:50:35.606950] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:44.552 [2024-12-10 04:50:35.606958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.552 [2024-12-10 04:50:35.606963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.552 [2024-12-10 04:50:35.606969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.552 [2024-12-10 04:50:35.606974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.552 [2024-12-10 04:50:35.610174] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:44.552 [2024-12-10 04:50:35.610185] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:44.552 [2024-12-10 04:50:35.611130] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:44.552 [2024-12-10 04:50:35.611180] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:44.552 [2024-12-10 04:50:35.611186] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:44.552 [2024-12-10 04:50:35.612131] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:44.552 [2024-12-10 04:50:35.612141] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:44.552 [2024-12-10 04:50:35.612193] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:44.552 [2024-12-10 04:50:35.613155] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:44.552 Life Percentage Used: 0% 00:13:44.552 Data Units Read: 0 00:13:44.552 Data 
Units Written: 0 00:13:44.552 Host Read Commands: 0 00:13:44.552 Host Write Commands: 0 00:13:44.552 Controller Busy Time: 0 minutes 00:13:44.552 Power Cycles: 0 00:13:44.552 Power On Hours: 0 hours 00:13:44.552 Unsafe Shutdowns: 0 00:13:44.552 Unrecoverable Media Errors: 0 00:13:44.552 Lifetime Error Log Entries: 0 00:13:44.552 Warning Temperature Time: 0 minutes 00:13:44.552 Critical Temperature Time: 0 minutes 00:13:44.552 00:13:44.552 Number of Queues 00:13:44.552 ================ 00:13:44.552 Number of I/O Submission Queues: 127 00:13:44.552 Number of I/O Completion Queues: 127 00:13:44.552 00:13:44.552 Active Namespaces 00:13:44.552 ================= 00:13:44.552 Namespace ID:1 00:13:44.552 Error Recovery Timeout: Unlimited 00:13:44.552 Command Set Identifier: NVM (00h) 00:13:44.552 Deallocate: Supported 00:13:44.552 Deallocated/Unwritten Error: Not Supported 00:13:44.552 Deallocated Read Value: Unknown 00:13:44.552 Deallocate in Write Zeroes: Not Supported 00:13:44.552 Deallocated Guard Field: 0xFFFF 00:13:44.552 Flush: Supported 00:13:44.552 Reservation: Supported 00:13:44.552 Namespace Sharing Capabilities: Multiple Controllers 00:13:44.552 Size (in LBAs): 131072 (0GiB) 00:13:44.552 Capacity (in LBAs): 131072 (0GiB) 00:13:44.552 Utilization (in LBAs): 131072 (0GiB) 00:13:44.552 NGUID: 4389764F6A0546E5BCA093976DF3402F 00:13:44.552 UUID: 4389764f-6a05-46e5-bca0-93976df3402f 00:13:44.552 Thin Provisioning: Not Supported 00:13:44.552 Per-NS Atomic Units: Yes 00:13:44.552 Atomic Boundary Size (Normal): 0 00:13:44.552 Atomic Boundary Size (PFail): 0 00:13:44.552 Atomic Boundary Offset: 0 00:13:44.552 Maximum Single Source Range Length: 65535 00:13:44.552 Maximum Copy Length: 65535 00:13:44.552 Maximum Source Range Count: 1 00:13:44.552 NGUID/EUI64 Never Reused: No 00:13:44.552 Namespace Write Protected: No 00:13:44.552 Number of LBA Formats: 1 00:13:44.552 Current LBA Format: LBA Format #00 00:13:44.552 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:13:44.552 00:13:44.552 04:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:44.812 [2024-12-10 04:50:35.839017] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:50.089 Initializing NVMe Controllers 00:13:50.089 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:50.089 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:50.089 Initialization complete. Launching workers. 00:13:50.089 ======================================================== 00:13:50.089 Latency(us) 00:13:50.089 Device Information : IOPS MiB/s Average min max 00:13:50.089 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39890.49 155.82 3208.38 960.44 7288.22 00:13:50.089 ======================================================== 00:13:50.089 Total : 39890.49 155.82 3208.38 960.44 7288.22 00:13:50.089 00:13:50.089 [2024-12-10 04:50:40.856078] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:50.089 04:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:50.089 [2024-12-10 04:50:41.095190] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:55.361 Initializing NVMe Controllers 00:13:55.361 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:13:55.361 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:55.361 Initialization complete. Launching workers. 00:13:55.361 ======================================================== 00:13:55.361 Latency(us) 00:13:55.361 Device Information : IOPS MiB/s Average min max 00:13:55.361 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7983.00 4986.16 10977.15 00:13:55.361 ======================================================== 00:13:55.361 Total : 16051.20 62.70 7983.00 4986.16 10977.15 00:13:55.361 00:13:55.361 [2024-12-10 04:50:46.131287] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:55.361 04:50:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:55.361 [2024-12-10 04:50:46.338261] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:00.633 [2024-12-10 04:50:51.414491] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:00.633 Initializing NVMe Controllers 00:14:00.633 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:00.633 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:00.633 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:00.633 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:00.633 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:00.633 Initialization complete. Launching workers. 
00:14:00.633 Starting thread on core 2 00:14:00.633 Starting thread on core 3 00:14:00.633 Starting thread on core 1 00:14:00.633 04:50:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:00.633 [2024-12-10 04:50:51.716588] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:03.923 [2024-12-10 04:50:54.786379] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:03.923 Initializing NVMe Controllers 00:14:03.923 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:03.923 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:03.923 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:03.923 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:03.923 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:03.923 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:03.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:03.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:03.923 Initialization complete. Launching workers. 
00:14:03.923 Starting thread on core 1 with urgent priority queue 00:14:03.923 Starting thread on core 2 with urgent priority queue 00:14:03.923 Starting thread on core 3 with urgent priority queue 00:14:03.923 Starting thread on core 0 with urgent priority queue 00:14:03.923 SPDK bdev Controller (SPDK1 ) core 0: 7221.67 IO/s 13.85 secs/100000 ios 00:14:03.923 SPDK bdev Controller (SPDK1 ) core 1: 6533.00 IO/s 15.31 secs/100000 ios 00:14:03.923 SPDK bdev Controller (SPDK1 ) core 2: 6432.33 IO/s 15.55 secs/100000 ios 00:14:03.923 SPDK bdev Controller (SPDK1 ) core 3: 7243.00 IO/s 13.81 secs/100000 ios 00:14:03.923 ======================================================== 00:14:03.923 00:14:03.923 04:50:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:04.181 [2024-12-10 04:50:55.074019] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:04.181 Initializing NVMe Controllers 00:14:04.181 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.181 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.181 Namespace ID: 1 size: 0GB 00:14:04.181 Initialization complete. 00:14:04.181 INFO: using host memory buffer for IO 00:14:04.181 Hello world! 
00:14:04.181 [2024-12-10 04:50:55.108242] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:04.181 04:50:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:04.474 [2024-12-10 04:50:55.395741] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:05.517 Initializing NVMe Controllers 00:14:05.517 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:05.517 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:05.517 Initialization complete. Launching workers. 00:14:05.517 submit (in ns) avg, min, max = 6794.7, 3097.1, 4000006.7 00:14:05.517 complete (in ns) avg, min, max = 19479.3, 1715.2, 4000619.0 00:14:05.517 00:14:05.517 Submit histogram 00:14:05.517 ================ 00:14:05.517 Range in us Cumulative Count 00:14:05.517 3.093 - 3.109: 0.0061% ( 1) 00:14:05.517 3.124 - 3.139: 0.0182% ( 2) 00:14:05.517 3.139 - 3.154: 0.0486% ( 5) 00:14:05.517 3.154 - 3.170: 0.1397% ( 15) 00:14:05.517 3.170 - 3.185: 0.2308% ( 15) 00:14:05.517 3.185 - 3.200: 0.5769% ( 57) 00:14:05.517 3.200 - 3.215: 1.8704% ( 213) 00:14:05.517 3.215 - 3.230: 4.5121% ( 435) 00:14:05.517 3.230 - 3.246: 7.7913% ( 540) 00:14:05.517 3.246 - 3.261: 11.6658% ( 638) 00:14:05.517 3.261 - 3.276: 17.1130% ( 897) 00:14:05.517 3.276 - 3.291: 23.0582% ( 979) 00:14:05.517 3.291 - 3.307: 28.9002% ( 962) 00:14:05.517 3.307 - 3.322: 35.0094% ( 1006) 00:14:05.517 3.322 - 3.337: 40.9061% ( 971) 00:14:05.517 3.337 - 3.352: 45.8250% ( 810) 00:14:05.517 3.352 - 3.368: 50.6893% ( 801) 00:14:05.517 3.368 - 3.383: 56.7074% ( 991) 00:14:05.517 3.383 - 3.398: 61.8753% ( 851) 00:14:05.517 3.398 - 3.413: 66.5513% ( 770) 00:14:05.517 3.413 - 3.429: 71.8407% ( 871) 00:14:05.517 
3.429 - 3.444: 76.3649% ( 745) 00:14:05.517 3.444 - 3.459: 79.7960% ( 565) 00:14:05.517 3.459 - 3.474: 83.1420% ( 551) 00:14:05.517 3.474 - 3.490: 85.5954% ( 404) 00:14:05.517 3.490 - 3.505: 86.9436% ( 222) 00:14:05.517 3.505 - 3.520: 87.7998% ( 141) 00:14:05.517 3.520 - 3.535: 88.5225% ( 119) 00:14:05.517 3.535 - 3.550: 89.2391% ( 118) 00:14:05.517 3.550 - 3.566: 90.0225% ( 129) 00:14:05.517 3.566 - 3.581: 90.7816% ( 125) 00:14:05.517 3.581 - 3.596: 91.5528% ( 127) 00:14:05.517 3.596 - 3.611: 92.6155% ( 175) 00:14:05.517 3.611 - 3.627: 93.5264% ( 150) 00:14:05.517 3.627 - 3.642: 94.4313% ( 149) 00:14:05.517 3.642 - 3.657: 95.1236% ( 114) 00:14:05.517 3.657 - 3.672: 95.7977% ( 111) 00:14:05.517 3.672 - 3.688: 96.6175% ( 135) 00:14:05.517 3.688 - 3.703: 97.2976% ( 112) 00:14:05.517 3.703 - 3.718: 97.8199% ( 86) 00:14:05.517 3.718 - 3.733: 98.2450% ( 70) 00:14:05.517 3.733 - 3.749: 98.5425% ( 49) 00:14:05.517 3.749 - 3.764: 98.8462% ( 50) 00:14:05.517 3.764 - 3.779: 98.9859% ( 23) 00:14:05.517 3.779 - 3.794: 99.1498% ( 27) 00:14:05.517 3.794 - 3.810: 99.2531% ( 17) 00:14:05.517 3.810 - 3.825: 99.3502% ( 16) 00:14:05.517 3.825 - 3.840: 99.3927% ( 7) 00:14:05.517 3.840 - 3.855: 99.4109% ( 3) 00:14:05.517 3.855 - 3.870: 99.4352% ( 4) 00:14:05.517 3.870 - 3.886: 99.4474% ( 2) 00:14:05.517 3.886 - 3.901: 99.4535% ( 1) 00:14:05.517 3.901 - 3.931: 99.4777% ( 4) 00:14:05.517 3.931 - 3.962: 99.5081% ( 5) 00:14:05.517 3.962 - 3.992: 99.5203% ( 2) 00:14:05.517 3.992 - 4.023: 99.5506% ( 5) 00:14:05.517 4.023 - 4.053: 99.5810% ( 5) 00:14:05.517 4.053 - 4.084: 99.5871% ( 1) 00:14:05.517 4.084 - 4.114: 99.5931% ( 1) 00:14:05.517 4.114 - 4.145: 99.6053% ( 2) 00:14:05.517 4.175 - 4.206: 99.6113% ( 1) 00:14:05.517 4.297 - 4.328: 99.6174% ( 1) 00:14:05.517 4.328 - 4.358: 99.6235% ( 1) 00:14:05.517 5.333 - 5.364: 99.6296% ( 1) 00:14:05.517 5.638 - 5.669: 99.6356% ( 1) 00:14:05.517 5.943 - 5.973: 99.6417% ( 1) 00:14:05.517 5.973 - 6.004: 99.6478% ( 1) 00:14:05.517 6.156 - 6.187: 99.6539% 
( 1) 00:14:05.517 6.217 - 6.248: 99.6660% ( 2) 00:14:05.517 6.370 - 6.400: 99.6781% ( 2) 00:14:05.517 6.430 - 6.461: 99.6842% ( 1) 00:14:05.517 6.461 - 6.491: 99.6903% ( 1) 00:14:05.517 6.522 - 6.552: 99.7024% ( 2) 00:14:05.517 6.674 - 6.705: 99.7085% ( 1) 00:14:05.517 6.735 - 6.766: 99.7146% ( 1) 00:14:05.517 6.766 - 6.796: 99.7207% ( 1) 00:14:05.517 6.857 - 6.888: 99.7267% ( 1) 00:14:05.517 6.888 - 6.918: 99.7328% ( 1) 00:14:05.517 6.918 - 6.949: 99.7389% ( 1) 00:14:05.517 6.949 - 6.979: 99.7449% ( 1) 00:14:05.517 6.979 - 7.010: 99.7510% ( 1) 00:14:05.517 7.010 - 7.040: 99.7571% ( 1) 00:14:05.518 7.192 - 7.223: 99.7632% ( 1) 00:14:05.518 7.223 - 7.253: 99.7692% ( 1) 00:14:05.518 7.375 - 7.406: 99.7753% ( 1) 00:14:05.518 7.497 - 7.528: 99.7814% ( 1) 00:14:05.518 7.558 - 7.589: 99.7875% ( 1) 00:14:05.518 7.589 - 7.619: 99.7935% ( 1) 00:14:05.518 7.650 - 7.680: 99.7996% ( 1) 00:14:05.518 7.802 - 7.863: 99.8057% ( 1) 00:14:05.518 8.107 - 8.168: 99.8117% ( 1) 00:14:05.518 8.168 - 8.229: 99.8178% ( 1) 00:14:05.518 8.229 - 8.290: 99.8300% ( 2) 00:14:05.518 8.411 - 8.472: 99.8360% ( 1) 00:14:05.518 8.472 - 8.533: 99.8421% ( 1) 00:14:05.518 8.655 - 8.716: 99.8482% ( 1) 00:14:05.518 8.716 - 8.777: 99.8543% ( 1) 00:14:05.518 8.838 - 8.899: 99.8603% ( 1) 00:14:05.518 9.143 - 9.204: 99.8664% ( 1) 00:14:05.518 9.630 - 9.691: 99.8725% ( 1) 00:14:05.518 9.752 - 9.813: 99.8785% ( 1) 00:14:05.518 9.813 - 9.874: 99.8846% ( 1) 00:14:05.518 9.874 - 9.935: 99.8907% ( 1) 00:14:05.518 13.410 - 13.470: 99.8968% ( 1) 00:14:05.518 15.482 - 15.543: 99.9028% ( 1) 00:14:05.518 19.017 - 19.139: 99.9089% ( 1) 00:14:05.518 19.870 - 19.992: 99.9150% ( 1) 00:14:05.518 3994.575 - 4025.783: 100.0000% ( 14) 00:14:05.518 00:14:05.518 Complete histogram 00:14:05.518 ================== 00:14:05.518 Range in us Cumulative Count 00:14:05.518 1.714 - 1.722: 0.1093% ( 18) 00:14:05.518 1.722 - 1.730: 0.4919% ( 63) 00:14:05.518 1.730 - 1.737: 0.9413% ( 74) 00:14:05.518 1.737 - 1.745: 1.0992% ( 26) 
00:14:05.518 1.745 - 1.752: 1.2024% ( 17) 00:14:05.518 1.752 - 1.760: 1.3056% ( 17) 00:14:05.518 1.760 - 1.768: 1.6336% ( 54) 00:14:05.518 1.768 - 1.775: 6.8683% ( 862) 00:14:05.518 1.775 - 1.783: 23.7809% ( 2785) 00:14:05.518 1.783 - 1.790: 37.0802% ( 2190) 00:14:05.518 1.790 - 1.798: 41.6834% ( 758) 00:14:05.518 1.798 - 1.806: 44.7380% ( 503) 00:14:05.518 1.806 - 1.813: 48.2359% ( 576) 00:14:05.518 1.813 - 1.821: 50.3553% ( 349) 00:14:05.518 1.821 - 1.829: 52.7358% ( 392) 00:14:05.518 1.829 - 1.836: 62.6647% ( 1635) 00:14:05.518 1.836 - 1.844: 78.1685% ( 2553) 00:14:05.518 1.844 - 1.851: 88.3828% ( 1682) 00:14:05.518 1.851 - 1.859: 92.6338% ( 700) 00:14:05.518 1.859 - 1.867: 94.4131% ( 293) 00:14:05.518 1.867 - 1.874: 95.4090% ( 164) 00:14:05.518 1.874 - 1.882: 96.0406% ( 104) 00:14:05.518 1.882 - 1.890: 96.6539% ( 101) 00:14:05.518 1.890 - 1.897: 96.9454% ( 48) 00:14:05.518 1.897 - 1.905: 97.1640% ( 36) 00:14:05.518 1.905 - 1.912: 97.3705% ( 34) 00:14:05.518 1.912 - 1.920: 97.6195% ( 41) 00:14:05.518 1.920 - 1.928: 97.8563% ( 39) 00:14:05.518 1.928 - 1.935: 98.1296% ( 45) 00:14:05.518 1.935 - 1.943: 98.2875% ( 26) 00:14:05.518 1.943 - 1.950: 98.4150% ( 21) 00:14:05.518 1.950 - 1.966: 98.7612% ( 57) 00:14:05.518 1.966 - 1.981: 98.8826% ( 20) 00:14:05.518 1.981 - 1.996: 98.9069% ( 4) 00:14:05.518 1.996 - 2.011: 98.9191% ( 2) 00:14:05.518 2.011 - 2.027: 99.0101% ( 15) 00:14:05.518 2.027 - 2.042: 99.0587% ( 8) 00:14:05.518 2.042 - 2.057: 99.0891% ( 5) 00:14:05.518 2.072 - 2.088: 99.1255% ( 6) 00:14:05.518 2.088 - 2.103: 99.1377% ( 2) 00:14:05.518 2.133 - 2.149: 99.1437% ( 1) 00:14:05.518 2.149 - 2.164: 99.1741% ( 5) 00:14:05.518 2.164 - 2.179: 99.2045% ( 5) 00:14:05.518 2.179 - 2.194: 99.2105% ( 1) 00:14:05.518 2.225 - 2.240: 99.2288% ( 3) 00:14:05.518 2.240 - 2.255: 99.2409% ( 2) 00:14:05.518 2.270 - 2.286: 99.2470% ( 1) 00:14:05.518 2.469 - 2.484: 99.2531% ( 1) 00:14:05.518 3.688 - 3.703: 99.2591% ( 1) 00:14:05.518 3.718 - 3.733: 99.2652% ( 1) 00:14:05.518 3.764 - 
3.779: 99.2713% ( 1) 00:14:05.518 3.810 - 3.825: 99.2773% ( 1) 00:14:05.518 3.840 - 3.855: 99.2834% ( 1) 00:14:05.518 3.901 - 3.931: 99.2895% ( 1) 00:14:05.518 3.931 - 3.962: 99.2956% ( 1) 00:14:05.518 4.175 - 4.206: 99.3016% ( 1) 00:14:05.518 4.267 - 4.297: 99.3077% ( 1) 00:14:05.518 [2024-12-10 04:50:56.417626] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:05.518 4.358 - 4.389: 99.3138% ( 1) 00:14:05.518 4.450 - 4.480: 99.3199% ( 1) 00:14:05.518 4.663 - 4.693: 99.3259% ( 1) 00:14:05.518 4.693 - 4.724: 99.3320% ( 1) 00:14:05.518 4.724 - 4.754: 99.3441% ( 2) 00:14:05.518 4.785 - 4.815: 99.3502% ( 1) 00:14:05.518 4.815 - 4.846: 99.3563% ( 1) 00:14:05.518 4.968 - 4.998: 99.3624% ( 1) 00:14:05.518 5.029 - 5.059: 99.3684% ( 1) 00:14:05.518 5.090 - 5.120: 99.3745% ( 1) 00:14:05.518 5.150 - 5.181: 99.3806% ( 1) 00:14:05.518 5.242 - 5.272: 99.3867% ( 1) 00:14:05.518 5.272 - 5.303: 99.3927% ( 1) 00:14:05.518 5.394 - 5.425: 99.3988% ( 1) 00:14:05.518 5.425 - 5.455: 99.4049% ( 1) 00:14:05.518 5.455 - 5.486: 99.4170% ( 2) 00:14:05.518 5.486 - 5.516: 99.4292% ( 2) 00:14:05.518 5.516 - 5.547: 99.4352% ( 1) 00:14:05.518 5.790 - 5.821: 99.4413% ( 1) 00:14:05.518 5.912 - 5.943: 99.4474% ( 1) 00:14:05.518 6.400 - 6.430: 99.4535% ( 1) 00:14:05.518 6.705 - 6.735: 99.4595% ( 1) 00:14:05.518 6.735 - 6.766: 99.4656% ( 1) 00:14:05.518 6.796 - 6.827: 99.4717% ( 1) 00:14:05.518 7.070 - 7.101: 99.4777% ( 1) 00:14:05.518 7.375 - 7.406: 99.4899% ( 2) 00:14:05.518 7.497 - 7.528: 99.4960% ( 1) 00:14:05.518 7.802 - 7.863: 99.5020% ( 1) 00:14:05.518 8.716 - 8.777: 99.5081% ( 1) 00:14:05.518 10.118 - 10.179: 99.5142% ( 1) 00:14:05.518 10.301 - 10.362: 99.5203% ( 1) 00:14:05.518 12.008 - 12.069: 99.5263% ( 1) 00:14:05.518 13.836 - 13.897: 99.5324% ( 1) 00:14:05.518 14.750 - 14.811: 99.5385% ( 1) 00:14:05.518 17.798 - 17.920: 99.5506% ( 2) 00:14:05.518 18.773 - 18.895: 99.5567% ( 1) 00:14:05.518 3417.234 - 3432.838: 99.5628% ( 1) 
00:14:05.518 3542.065 - 3557.669: 99.5688% ( 1) 00:14:05.518 3994.575 - 4025.783: 100.0000% ( 71) 00:14:05.518 00:14:05.518 04:50:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:05.518 04:50:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:05.518 04:50:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:05.518 04:50:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:05.518 04:50:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:05.518 [ 00:14:05.518 { 00:14:05.518 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:05.518 "subtype": "Discovery", 00:14:05.518 "listen_addresses": [], 00:14:05.518 "allow_any_host": true, 00:14:05.518 "hosts": [] 00:14:05.518 }, 00:14:05.518 { 00:14:05.518 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:05.518 "subtype": "NVMe", 00:14:05.518 "listen_addresses": [ 00:14:05.518 { 00:14:05.518 "trtype": "VFIOUSER", 00:14:05.518 "adrfam": "IPv4", 00:14:05.518 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:05.518 "trsvcid": "0" 00:14:05.518 } 00:14:05.518 ], 00:14:05.518 "allow_any_host": true, 00:14:05.518 "hosts": [], 00:14:05.518 "serial_number": "SPDK1", 00:14:05.518 "model_number": "SPDK bdev Controller", 00:14:05.518 "max_namespaces": 32, 00:14:05.518 "min_cntlid": 1, 00:14:05.518 "max_cntlid": 65519, 00:14:05.518 "namespaces": [ 00:14:05.518 { 00:14:05.518 "nsid": 1, 00:14:05.518 "bdev_name": "Malloc1", 00:14:05.518 "name": "Malloc1", 00:14:05.518 "nguid": "4389764F6A0546E5BCA093976DF3402F", 00:14:05.518 "uuid": "4389764f-6a05-46e5-bca0-93976df3402f" 00:14:05.518 } 
00:14:05.518 ] 00:14:05.518 }, 00:14:05.518 { 00:14:05.518 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:05.518 "subtype": "NVMe", 00:14:05.518 "listen_addresses": [ 00:14:05.518 { 00:14:05.518 "trtype": "VFIOUSER", 00:14:05.518 "adrfam": "IPv4", 00:14:05.518 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:05.518 "trsvcid": "0" 00:14:05.518 } 00:14:05.518 ], 00:14:05.518 "allow_any_host": true, 00:14:05.518 "hosts": [], 00:14:05.518 "serial_number": "SPDK2", 00:14:05.518 "model_number": "SPDK bdev Controller", 00:14:05.518 "max_namespaces": 32, 00:14:05.518 "min_cntlid": 1, 00:14:05.518 "max_cntlid": 65519, 00:14:05.518 "namespaces": [ 00:14:05.518 { 00:14:05.518 "nsid": 1, 00:14:05.518 "bdev_name": "Malloc2", 00:14:05.518 "name": "Malloc2", 00:14:05.518 "nguid": "E7BDF6A06B8F4E23BF0E18A60ACBCD4C", 00:14:05.518 "uuid": "e7bdf6a0-6b8f-4e23-bf0e-18a60acbcd4c" 00:14:05.518 } 00:14:05.518 ] 00:14:05.518 } 00:14:05.518 ] 00:14:05.778 04:50:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:05.778 04:50:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=591500 00:14:05.778 04:50:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:05.778 04:50:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:05.778 04:50:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:05.778 04:50:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:05.778 04:50:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:05.778 04:50:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:05.778 04:50:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:05.778 04:50:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:05.778 [2024-12-10 04:50:56.830571] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:05.778 Malloc3 00:14:05.778 04:50:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:06.037 [2024-12-10 04:50:57.075334] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:06.037 04:50:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:06.037 Asynchronous Event Request test 00:14:06.037 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:06.037 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:06.037 Registering asynchronous event callbacks... 00:14:06.037 Starting namespace attribute notice tests for all controllers... 00:14:06.037 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:06.037 aer_cb - Changed Namespace 00:14:06.038 Cleaning up... 
00:14:06.298 [ 00:14:06.298 { 00:14:06.298 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:06.298 "subtype": "Discovery", 00:14:06.298 "listen_addresses": [], 00:14:06.298 "allow_any_host": true, 00:14:06.298 "hosts": [] 00:14:06.298 }, 00:14:06.298 { 00:14:06.298 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:06.298 "subtype": "NVMe", 00:14:06.298 "listen_addresses": [ 00:14:06.298 { 00:14:06.298 "trtype": "VFIOUSER", 00:14:06.298 "adrfam": "IPv4", 00:14:06.298 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:06.298 "trsvcid": "0" 00:14:06.298 } 00:14:06.298 ], 00:14:06.298 "allow_any_host": true, 00:14:06.298 "hosts": [], 00:14:06.298 "serial_number": "SPDK1", 00:14:06.298 "model_number": "SPDK bdev Controller", 00:14:06.298 "max_namespaces": 32, 00:14:06.298 "min_cntlid": 1, 00:14:06.298 "max_cntlid": 65519, 00:14:06.298 "namespaces": [ 00:14:06.298 { 00:14:06.298 "nsid": 1, 00:14:06.298 "bdev_name": "Malloc1", 00:14:06.298 "name": "Malloc1", 00:14:06.298 "nguid": "4389764F6A0546E5BCA093976DF3402F", 00:14:06.298 "uuid": "4389764f-6a05-46e5-bca0-93976df3402f" 00:14:06.298 }, 00:14:06.298 { 00:14:06.298 "nsid": 2, 00:14:06.298 "bdev_name": "Malloc3", 00:14:06.298 "name": "Malloc3", 00:14:06.298 "nguid": "B2B64F0C33EE419FA75175E2C4F1DF9E", 00:14:06.298 "uuid": "b2b64f0c-33ee-419f-a751-75e2c4f1df9e" 00:14:06.298 } 00:14:06.298 ] 00:14:06.298 }, 00:14:06.298 { 00:14:06.298 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:06.298 "subtype": "NVMe", 00:14:06.298 "listen_addresses": [ 00:14:06.298 { 00:14:06.298 "trtype": "VFIOUSER", 00:14:06.298 "adrfam": "IPv4", 00:14:06.298 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:06.298 "trsvcid": "0" 00:14:06.298 } 00:14:06.298 ], 00:14:06.298 "allow_any_host": true, 00:14:06.298 "hosts": [], 00:14:06.298 "serial_number": "SPDK2", 00:14:06.298 "model_number": "SPDK bdev Controller", 00:14:06.298 "max_namespaces": 32, 00:14:06.298 "min_cntlid": 1, 00:14:06.298 "max_cntlid": 65519, 00:14:06.298 "namespaces": [ 
00:14:06.298 { 00:14:06.298 "nsid": 1, 00:14:06.298 "bdev_name": "Malloc2", 00:14:06.298 "name": "Malloc2", 00:14:06.298 "nguid": "E7BDF6A06B8F4E23BF0E18A60ACBCD4C", 00:14:06.298 "uuid": "e7bdf6a0-6b8f-4e23-bf0e-18a60acbcd4c" 00:14:06.298 } 00:14:06.298 ] 00:14:06.298 } 00:14:06.298 ] 00:14:06.298 04:50:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 591500 00:14:06.298 04:50:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:06.298 04:50:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:06.298 04:50:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:06.298 04:50:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:06.298 [2024-12-10 04:50:57.329090] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:14:06.298 [2024-12-10 04:50:57.329136] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid591626 ] 00:14:06.298 [2024-12-10 04:50:57.370521] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:06.298 [2024-12-10 04:50:57.375764] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:06.298 [2024-12-10 04:50:57.375787] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f62482f3000 00:14:06.298 [2024-12-10 04:50:57.376762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.298 [2024-12-10 04:50:57.377772] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.298 [2024-12-10 04:50:57.378778] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.298 [2024-12-10 04:50:57.379785] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:06.298 [2024-12-10 04:50:57.380792] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:06.298 [2024-12-10 04:50:57.381807] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.298 [2024-12-10 04:50:57.382808] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:06.298 
[2024-12-10 04:50:57.383817] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.298 [2024-12-10 04:50:57.384828] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:06.298 [2024-12-10 04:50:57.384837] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f62482e8000 00:14:06.298 [2024-12-10 04:50:57.385755] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:06.298 [2024-12-10 04:50:57.395110] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:06.298 [2024-12-10 04:50:57.395132] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:06.298 [2024-12-10 04:50:57.399214] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:06.298 [2024-12-10 04:50:57.399250] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:06.298 [2024-12-10 04:50:57.399316] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:06.298 [2024-12-10 04:50:57.399329] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:06.298 [2024-12-10 04:50:57.399333] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:06.298 [2024-12-10 04:50:57.400214] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:06.298 [2024-12-10 04:50:57.400225] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:06.298 [2024-12-10 04:50:57.400232] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:06.298 [2024-12-10 04:50:57.401220] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:06.298 [2024-12-10 04:50:57.401229] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:06.298 [2024-12-10 04:50:57.401235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:06.298 [2024-12-10 04:50:57.402224] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:06.298 [2024-12-10 04:50:57.402232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:06.298 [2024-12-10 04:50:57.403234] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:06.298 [2024-12-10 04:50:57.403243] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:06.298 [2024-12-10 04:50:57.403247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:06.298 [2024-12-10 04:50:57.403253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:06.299 [2024-12-10 04:50:57.403360] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:06.299 [2024-12-10 04:50:57.403364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:06.299 [2024-12-10 04:50:57.403369] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:06.299 [2024-12-10 04:50:57.404240] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:06.299 [2024-12-10 04:50:57.405250] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:06.299 [2024-12-10 04:50:57.406256] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:06.299 [2024-12-10 04:50:57.407257] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:06.299 [2024-12-10 04:50:57.407296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:06.299 [2024-12-10 04:50:57.408268] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:06.299 [2024-12-10 04:50:57.408276] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:06.299 [2024-12-10 04:50:57.408280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:06.299 [2024-12-10 04:50:57.408297] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:06.299 [2024-12-10 04:50:57.408304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:06.299 [2024-12-10 04:50:57.408317] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:06.299 [2024-12-10 04:50:57.408322] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.299 [2024-12-10 04:50:57.408325] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.299 [2024-12-10 04:50:57.408335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.299 [2024-12-10 04:50:57.417175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:06.299 [2024-12-10 04:50:57.417189] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:06.299 [2024-12-10 04:50:57.417194] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:06.299 [2024-12-10 04:50:57.417198] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:06.299 [2024-12-10 04:50:57.417202] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:06.299 [2024-12-10 04:50:57.417206] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:06.299 [2024-12-10 04:50:57.417210] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:06.299 [2024-12-10 04:50:57.417214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:06.299 [2024-12-10 04:50:57.417221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:06.299 [2024-12-10 04:50:57.417230] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:06.299 [2024-12-10 04:50:57.425173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:06.299 [2024-12-10 04:50:57.425184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.299 [2024-12-10 04:50:57.425191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.299 [2024-12-10 04:50:57.425201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.299 [2024-12-10 04:50:57.425208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.299 [2024-12-10 04:50:57.425212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:06.299 [2024-12-10 04:50:57.425220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:06.299 [2024-12-10 04:50:57.425228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:06.560 [2024-12-10 04:50:57.433172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:06.560 [2024-12-10 04:50:57.433181] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:06.560 [2024-12-10 04:50:57.433185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:06.560 [2024-12-10 04:50:57.433191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:06.560 [2024-12-10 04:50:57.433196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:06.560 [2024-12-10 04:50:57.433204] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:06.560 [2024-12-10 04:50:57.441171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:06.560 [2024-12-10 04:50:57.441228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:06.560 [2024-12-10 04:50:57.441236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:06.560 
[2024-12-10 04:50:57.441242] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:06.560 [2024-12-10 04:50:57.441246] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:06.560 [2024-12-10 04:50:57.441249] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.560 [2024-12-10 04:50:57.441255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:06.560 [2024-12-10 04:50:57.449172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:06.560 [2024-12-10 04:50:57.449182] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:06.560 [2024-12-10 04:50:57.449193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:06.560 [2024-12-10 04:50:57.449200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:06.560 [2024-12-10 04:50:57.449206] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:06.560 [2024-12-10 04:50:57.449210] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.560 [2024-12-10 04:50:57.449213] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.560 [2024-12-10 04:50:57.449218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.560 [2024-12-10 04:50:57.457173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:06.560 [2024-12-10 04:50:57.457186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:06.560 [2024-12-10 04:50:57.457193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:06.560 [2024-12-10 04:50:57.457200] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:06.560 [2024-12-10 04:50:57.457203] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.560 [2024-12-10 04:50:57.457206] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.560 [2024-12-10 04:50:57.457212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.560 [2024-12-10 04:50:57.465173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:06.560 [2024-12-10 04:50:57.465182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:06.560 [2024-12-10 04:50:57.465188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:06.560 [2024-12-10 04:50:57.465195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:06.560 [2024-12-10 04:50:57.465201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:14:06.560 [2024-12-10 04:50:57.465206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:06.560 [2024-12-10 04:50:57.465211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:06.560 [2024-12-10 04:50:57.465215] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:06.560 [2024-12-10 04:50:57.465219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:06.560 [2024-12-10 04:50:57.465224] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:06.560 [2024-12-10 04:50:57.465238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:06.560 [2024-12-10 04:50:57.473172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:06.560 [2024-12-10 04:50:57.473185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:06.560 [2024-12-10 04:50:57.481174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:06.560 [2024-12-10 04:50:57.481186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:06.560 [2024-12-10 04:50:57.489172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:06.560 [2024-12-10 
04:50:57.489183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:06.560 [2024-12-10 04:50:57.497171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:06.560 [2024-12-10 04:50:57.497189] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:06.560 [2024-12-10 04:50:57.497193] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:06.560 [2024-12-10 04:50:57.497196] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:06.560 [2024-12-10 04:50:57.497199] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:06.560 [2024-12-10 04:50:57.497202] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:06.560 [2024-12-10 04:50:57.497208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:06.560 [2024-12-10 04:50:57.497214] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:06.560 [2024-12-10 04:50:57.497218] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:06.560 [2024-12-10 04:50:57.497221] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.560 [2024-12-10 04:50:57.497226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:06.560 [2024-12-10 04:50:57.497232] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:06.560 [2024-12-10 04:50:57.497236] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.560 [2024-12-10 04:50:57.497239] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.560 [2024-12-10 04:50:57.497244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.560 [2024-12-10 04:50:57.497250] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:06.560 [2024-12-10 04:50:57.497254] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:06.560 [2024-12-10 04:50:57.497257] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.560 [2024-12-10 04:50:57.497262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:06.560 [2024-12-10 04:50:57.505171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:06.560 [2024-12-10 04:50:57.505184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:06.560 [2024-12-10 04:50:57.505194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:06.560 [2024-12-10 04:50:57.505200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:06.560 ===================================================== 00:14:06.560 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:06.560 ===================================================== 00:14:06.560 Controller Capabilities/Features 00:14:06.560 
================================ 00:14:06.560 Vendor ID: 4e58 00:14:06.560 Subsystem Vendor ID: 4e58 00:14:06.560 Serial Number: SPDK2 00:14:06.560 Model Number: SPDK bdev Controller 00:14:06.560 Firmware Version: 25.01 00:14:06.560 Recommended Arb Burst: 6 00:14:06.560 IEEE OUI Identifier: 8d 6b 50 00:14:06.560 Multi-path I/O 00:14:06.560 May have multiple subsystem ports: Yes 00:14:06.560 May have multiple controllers: Yes 00:14:06.560 Associated with SR-IOV VF: No 00:14:06.560 Max Data Transfer Size: 131072 00:14:06.560 Max Number of Namespaces: 32 00:14:06.560 Max Number of I/O Queues: 127 00:14:06.560 NVMe Specification Version (VS): 1.3 00:14:06.560 NVMe Specification Version (Identify): 1.3 00:14:06.560 Maximum Queue Entries: 256 00:14:06.560 Contiguous Queues Required: Yes 00:14:06.560 Arbitration Mechanisms Supported 00:14:06.560 Weighted Round Robin: Not Supported 00:14:06.561 Vendor Specific: Not Supported 00:14:06.561 Reset Timeout: 15000 ms 00:14:06.561 Doorbell Stride: 4 bytes 00:14:06.561 NVM Subsystem Reset: Not Supported 00:14:06.561 Command Sets Supported 00:14:06.561 NVM Command Set: Supported 00:14:06.561 Boot Partition: Not Supported 00:14:06.561 Memory Page Size Minimum: 4096 bytes 00:14:06.561 Memory Page Size Maximum: 4096 bytes 00:14:06.561 Persistent Memory Region: Not Supported 00:14:06.561 Optional Asynchronous Events Supported 00:14:06.561 Namespace Attribute Notices: Supported 00:14:06.561 Firmware Activation Notices: Not Supported 00:14:06.561 ANA Change Notices: Not Supported 00:14:06.561 PLE Aggregate Log Change Notices: Not Supported 00:14:06.561 LBA Status Info Alert Notices: Not Supported 00:14:06.561 EGE Aggregate Log Change Notices: Not Supported 00:14:06.561 Normal NVM Subsystem Shutdown event: Not Supported 00:14:06.561 Zone Descriptor Change Notices: Not Supported 00:14:06.561 Discovery Log Change Notices: Not Supported 00:14:06.561 Controller Attributes 00:14:06.561 128-bit Host Identifier: Supported 00:14:06.561 
Non-Operational Permissive Mode: Not Supported 00:14:06.561 NVM Sets: Not Supported 00:14:06.561 Read Recovery Levels: Not Supported 00:14:06.561 Endurance Groups: Not Supported 00:14:06.561 Predictable Latency Mode: Not Supported 00:14:06.561 Traffic Based Keep ALive: Not Supported 00:14:06.561 Namespace Granularity: Not Supported 00:14:06.561 SQ Associations: Not Supported 00:14:06.561 UUID List: Not Supported 00:14:06.561 Multi-Domain Subsystem: Not Supported 00:14:06.561 Fixed Capacity Management: Not Supported 00:14:06.561 Variable Capacity Management: Not Supported 00:14:06.561 Delete Endurance Group: Not Supported 00:14:06.561 Delete NVM Set: Not Supported 00:14:06.561 Extended LBA Formats Supported: Not Supported 00:14:06.561 Flexible Data Placement Supported: Not Supported 00:14:06.561 00:14:06.561 Controller Memory Buffer Support 00:14:06.561 ================================ 00:14:06.561 Supported: No 00:14:06.561 00:14:06.561 Persistent Memory Region Support 00:14:06.561 ================================ 00:14:06.561 Supported: No 00:14:06.561 00:14:06.561 Admin Command Set Attributes 00:14:06.561 ============================ 00:14:06.561 Security Send/Receive: Not Supported 00:14:06.561 Format NVM: Not Supported 00:14:06.561 Firmware Activate/Download: Not Supported 00:14:06.561 Namespace Management: Not Supported 00:14:06.561 Device Self-Test: Not Supported 00:14:06.561 Directives: Not Supported 00:14:06.561 NVMe-MI: Not Supported 00:14:06.561 Virtualization Management: Not Supported 00:14:06.561 Doorbell Buffer Config: Not Supported 00:14:06.561 Get LBA Status Capability: Not Supported 00:14:06.561 Command & Feature Lockdown Capability: Not Supported 00:14:06.561 Abort Command Limit: 4 00:14:06.561 Async Event Request Limit: 4 00:14:06.561 Number of Firmware Slots: N/A 00:14:06.561 Firmware Slot 1 Read-Only: N/A 00:14:06.561 Firmware Activation Without Reset: N/A 00:14:06.561 Multiple Update Detection Support: N/A 00:14:06.561 Firmware Update 
Granularity: No Information Provided 00:14:06.561 Per-Namespace SMART Log: No 00:14:06.561 Asymmetric Namespace Access Log Page: Not Supported 00:14:06.561 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:06.561 Command Effects Log Page: Supported 00:14:06.561 Get Log Page Extended Data: Supported 00:14:06.561 Telemetry Log Pages: Not Supported 00:14:06.561 Persistent Event Log Pages: Not Supported 00:14:06.561 Supported Log Pages Log Page: May Support 00:14:06.561 Commands Supported & Effects Log Page: Not Supported 00:14:06.561 Feature Identifiers & Effects Log Page:May Support 00:14:06.561 NVMe-MI Commands & Effects Log Page: May Support 00:14:06.561 Data Area 4 for Telemetry Log: Not Supported 00:14:06.561 Error Log Page Entries Supported: 128 00:14:06.561 Keep Alive: Supported 00:14:06.561 Keep Alive Granularity: 10000 ms 00:14:06.561 00:14:06.561 NVM Command Set Attributes 00:14:06.561 ========================== 00:14:06.561 Submission Queue Entry Size 00:14:06.561 Max: 64 00:14:06.561 Min: 64 00:14:06.561 Completion Queue Entry Size 00:14:06.561 Max: 16 00:14:06.561 Min: 16 00:14:06.561 Number of Namespaces: 32 00:14:06.561 Compare Command: Supported 00:14:06.561 Write Uncorrectable Command: Not Supported 00:14:06.561 Dataset Management Command: Supported 00:14:06.561 Write Zeroes Command: Supported 00:14:06.561 Set Features Save Field: Not Supported 00:14:06.561 Reservations: Not Supported 00:14:06.561 Timestamp: Not Supported 00:14:06.561 Copy: Supported 00:14:06.561 Volatile Write Cache: Present 00:14:06.561 Atomic Write Unit (Normal): 1 00:14:06.561 Atomic Write Unit (PFail): 1 00:14:06.561 Atomic Compare & Write Unit: 1 00:14:06.561 Fused Compare & Write: Supported 00:14:06.561 Scatter-Gather List 00:14:06.561 SGL Command Set: Supported (Dword aligned) 00:14:06.561 SGL Keyed: Not Supported 00:14:06.561 SGL Bit Bucket Descriptor: Not Supported 00:14:06.561 SGL Metadata Pointer: Not Supported 00:14:06.561 Oversized SGL: Not Supported 00:14:06.561 SGL 
Metadata Address: Not Supported 00:14:06.561 SGL Offset: Not Supported 00:14:06.561 Transport SGL Data Block: Not Supported 00:14:06.561 Replay Protected Memory Block: Not Supported 00:14:06.561 00:14:06.561 Firmware Slot Information 00:14:06.561 ========================= 00:14:06.561 Active slot: 1 00:14:06.561 Slot 1 Firmware Revision: 25.01 00:14:06.561 00:14:06.561 00:14:06.561 Commands Supported and Effects 00:14:06.561 ============================== 00:14:06.561 Admin Commands 00:14:06.561 -------------- 00:14:06.561 Get Log Page (02h): Supported 00:14:06.561 Identify (06h): Supported 00:14:06.561 Abort (08h): Supported 00:14:06.561 Set Features (09h): Supported 00:14:06.561 Get Features (0Ah): Supported 00:14:06.561 Asynchronous Event Request (0Ch): Supported 00:14:06.561 Keep Alive (18h): Supported 00:14:06.561 I/O Commands 00:14:06.561 ------------ 00:14:06.561 Flush (00h): Supported LBA-Change 00:14:06.561 Write (01h): Supported LBA-Change 00:14:06.561 Read (02h): Supported 00:14:06.561 Compare (05h): Supported 00:14:06.561 Write Zeroes (08h): Supported LBA-Change 00:14:06.561 Dataset Management (09h): Supported LBA-Change 00:14:06.561 Copy (19h): Supported LBA-Change 00:14:06.561 00:14:06.561 Error Log 00:14:06.561 ========= 00:14:06.561 00:14:06.561 Arbitration 00:14:06.561 =========== 00:14:06.561 Arbitration Burst: 1 00:14:06.561 00:14:06.561 Power Management 00:14:06.561 ================ 00:14:06.561 Number of Power States: 1 00:14:06.561 Current Power State: Power State #0 00:14:06.561 Power State #0: 00:14:06.561 Max Power: 0.00 W 00:14:06.561 Non-Operational State: Operational 00:14:06.561 Entry Latency: Not Reported 00:14:06.561 Exit Latency: Not Reported 00:14:06.561 Relative Read Throughput: 0 00:14:06.561 Relative Read Latency: 0 00:14:06.561 Relative Write Throughput: 0 00:14:06.561 Relative Write Latency: 0 00:14:06.561 Idle Power: Not Reported 00:14:06.561 Active Power: Not Reported 00:14:06.561 Non-Operational Permissive Mode: Not 
Supported 00:14:06.561 00:14:06.561 Health Information 00:14:06.561 ================== 00:14:06.561 Critical Warnings: 00:14:06.561 Available Spare Space: OK 00:14:06.561 Temperature: OK 00:14:06.561 Device Reliability: OK 00:14:06.561 Read Only: No 00:14:06.561 Volatile Memory Backup: OK 00:14:06.561 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:06.561 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:06.561 Available Spare: 0% 
[2024-12-10 04:50:57.505288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:06.561 [2024-12-10 04:50:57.513171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:06.561 [2024-12-10 04:50:57.513201] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:06.561 [2024-12-10 04:50:57.513209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.561 [2024-12-10 04:50:57.513215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.561 [2024-12-10 04:50:57.513220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.561 [2024-12-10 04:50:57.513227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.561 [2024-12-10 04:50:57.513275] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:06.561 [2024-12-10 04:50:57.513286] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:06.561 
[2024-12-10 04:50:57.514284] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:06.561 [2024-12-10 04:50:57.514326] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:06.562 [2024-12-10 04:50:57.514332] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:06.562 [2024-12-10 04:50:57.515286] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:06.562 [2024-12-10 04:50:57.515296] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:06.562 [2024-12-10 04:50:57.515341] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:06.562 [2024-12-10 04:50:57.516304] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:06.562 
00:14:06.561 Available Spare Threshold: 0% 00:14:06.562 Life Percentage Used: 0% 00:14:06.562 Data Units Read: 0 00:14:06.562 Data Units Written: 0 00:14:06.562 Host Read Commands: 0 00:14:06.562 Host Write Commands: 0 00:14:06.562 Controller Busy Time: 0 minutes 00:14:06.562 Power Cycles: 0 00:14:06.562 Power On Hours: 0 hours 00:14:06.562 Unsafe Shutdowns: 0 00:14:06.562 Unrecoverable Media Errors: 0 00:14:06.562 Lifetime Error Log Entries: 0 00:14:06.562 Warning Temperature Time: 0 minutes 00:14:06.562 Critical Temperature Time: 0 minutes 00:14:06.562 00:14:06.562 Number of Queues 00:14:06.562 ================ 00:14:06.562 Number of I/O Submission Queues: 127 00:14:06.562 Number of I/O Completion Queues: 127 00:14:06.562 00:14:06.562 Active Namespaces 00:14:06.562 ================= 00:14:06.562 Namespace ID:1 00:14:06.562 Error Recovery Timeout: Unlimited 
00:14:06.562 Command Set Identifier: NVM (00h) 00:14:06.562 Deallocate: Supported 00:14:06.562 Deallocated/Unwritten Error: Not Supported 00:14:06.562 Deallocated Read Value: Unknown 00:14:06.562 Deallocate in Write Zeroes: Not Supported 00:14:06.562 Deallocated Guard Field: 0xFFFF 00:14:06.562 Flush: Supported 00:14:06.562 Reservation: Supported 00:14:06.562 Namespace Sharing Capabilities: Multiple Controllers 00:14:06.562 Size (in LBAs): 131072 (0GiB) 00:14:06.562 Capacity (in LBAs): 131072 (0GiB) 00:14:06.562 Utilization (in LBAs): 131072 (0GiB) 00:14:06.562 NGUID: E7BDF6A06B8F4E23BF0E18A60ACBCD4C 00:14:06.562 UUID: e7bdf6a0-6b8f-4e23-bf0e-18a60acbcd4c 00:14:06.562 Thin Provisioning: Not Supported 00:14:06.562 Per-NS Atomic Units: Yes 00:14:06.562 Atomic Boundary Size (Normal): 0 00:14:06.562 Atomic Boundary Size (PFail): 0 00:14:06.562 Atomic Boundary Offset: 0 00:14:06.562 Maximum Single Source Range Length: 65535 00:14:06.562 Maximum Copy Length: 65535 00:14:06.562 Maximum Source Range Count: 1 00:14:06.562 NGUID/EUI64 Never Reused: No 00:14:06.562 Namespace Write Protected: No 00:14:06.562 Number of LBA Formats: 1 00:14:06.562 Current LBA Format: LBA Format #00 00:14:06.562 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:06.562 00:14:06.562 04:50:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:06.821 [2024-12-10 04:50:57.743394] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:12.094 Initializing NVMe Controllers 00:14:12.094 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:12.094 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:14:12.094 Initialization complete. Launching workers. 00:14:12.094 ======================================================== 00:14:12.094 Latency(us) 00:14:12.094 Device Information : IOPS MiB/s Average min max 00:14:12.094 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39894.40 155.84 3208.59 962.98 10342.04 00:14:12.094 ======================================================== 00:14:12.094 Total : 39894.40 155.84 3208.59 962.98 10342.04 00:14:12.094 00:14:12.094 [2024-12-10 04:51:02.844439] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:12.094 04:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:12.094 [2024-12-10 04:51:03.079090] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:17.365 Initializing NVMe Controllers 00:14:17.365 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:17.365 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:17.365 Initialization complete. Launching workers. 
00:14:17.365 ======================================================== 00:14:17.365 Latency(us) 00:14:17.365 Device Information : IOPS MiB/s Average min max 00:14:17.365 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39937.34 156.01 3204.86 984.48 7576.40 00:14:17.365 ======================================================== 00:14:17.365 Total : 39937.34 156.01 3204.86 984.48 7576.40 00:14:17.365 00:14:17.365 [2024-12-10 04:51:08.099053] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:17.365 04:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:17.365 [2024-12-10 04:51:08.307522] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:22.640 [2024-12-10 04:51:13.439266] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:22.640 Initializing NVMe Controllers 00:14:22.640 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:22.640 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:22.640 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:22.640 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:22.640 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:22.640 Initialization complete. Launching workers. 
00:14:22.640 Starting thread on core 2 00:14:22.640 Starting thread on core 3 00:14:22.640 Starting thread on core 1 00:14:22.640 04:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:22.640 [2024-12-10 04:51:13.738565] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:26.835 [2024-12-10 04:51:17.429383] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:26.835 Initializing NVMe Controllers 00:14:26.835 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:26.835 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:26.835 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:26.835 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:26.835 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:26.835 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:26.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:26.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:26.835 Initialization complete. Launching workers. 
00:14:26.835 Starting thread on core 1 with urgent priority queue 00:14:26.835 Starting thread on core 2 with urgent priority queue 00:14:26.835 Starting thread on core 3 with urgent priority queue 00:14:26.835 Starting thread on core 0 with urgent priority queue 00:14:26.835 SPDK bdev Controller (SPDK2 ) core 0: 4947.33 IO/s 20.21 secs/100000 ios 00:14:26.835 SPDK bdev Controller (SPDK2 ) core 1: 4743.67 IO/s 21.08 secs/100000 ios 00:14:26.835 SPDK bdev Controller (SPDK2 ) core 2: 4502.33 IO/s 22.21 secs/100000 ios 00:14:26.835 SPDK bdev Controller (SPDK2 ) core 3: 4648.33 IO/s 21.51 secs/100000 ios 00:14:26.835 ======================================================== 00:14:26.835 00:14:26.835 04:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:26.835 [2024-12-10 04:51:17.714640] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:26.835 Initializing NVMe Controllers 00:14:26.835 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:26.835 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:26.835 Namespace ID: 1 size: 0GB 00:14:26.835 Initialization complete. 00:14:26.835 INFO: using host memory buffer for IO 00:14:26.835 Hello world! 
00:14:26.835 [2024-12-10 04:51:17.724712] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:26.835 04:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:27.094 [2024-12-10 04:51:18.003522] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:28.031 Initializing NVMe Controllers 00:14:28.032 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:28.032 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:28.032 Initialization complete. Launching workers. 00:14:28.032 submit (in ns) avg, min, max = 6484.9, 3180.0, 4001317.1 00:14:28.032 complete (in ns) avg, min, max = 20487.1, 1755.2, 7985401.0 00:14:28.032 00:14:28.032 Submit histogram 00:14:28.032 ================ 00:14:28.032 Range in us Cumulative Count 00:14:28.032 3.170 - 3.185: 0.0061% ( 1) 00:14:28.032 3.185 - 3.200: 0.1457% ( 23) 00:14:28.032 3.200 - 3.215: 0.7165% ( 94) 00:14:28.032 3.215 - 3.230: 2.2649% ( 255) 00:14:28.032 3.230 - 3.246: 4.4204% ( 355) 00:14:28.032 3.246 - 3.261: 8.1547% ( 615) 00:14:28.032 3.261 - 3.276: 13.6802% ( 910) 00:14:28.032 3.276 - 3.291: 19.7948% ( 1007) 00:14:28.032 3.291 - 3.307: 25.7696% ( 984) 00:14:28.032 3.307 - 3.322: 32.4610% ( 1102) 00:14:28.032 3.322 - 3.337: 39.1038% ( 1094) 00:14:28.032 3.337 - 3.352: 44.5018% ( 889) 00:14:28.032 3.352 - 3.368: 49.0619% ( 751) 00:14:28.032 3.368 - 3.383: 54.1563% ( 839) 00:14:28.032 3.383 - 3.398: 59.0564% ( 807) 00:14:28.032 3.398 - 3.413: 64.0780% ( 827) 00:14:28.032 3.413 - 3.429: 70.0710% ( 987) 00:14:28.032 3.429 - 3.444: 75.2505% ( 853) 00:14:28.032 3.444 - 3.459: 79.6588% ( 726) 00:14:28.032 3.459 - 3.474: 83.1987% ( 583) 00:14:28.032 3.474 - 3.490: 85.7490% ( 420) 
00:14:28.032 3.490 - 3.505: 87.2366% ( 245) 00:14:28.032 3.505 - 3.520: 88.1171% ( 145) 00:14:28.032 3.520 - 3.535: 88.6696% ( 91) 00:14:28.032 3.535 - 3.550: 89.1857% ( 85) 00:14:28.032 3.550 - 3.566: 89.9569% ( 127) 00:14:28.032 3.566 - 3.581: 90.7463% ( 130) 00:14:28.032 3.581 - 3.596: 91.5660% ( 135) 00:14:28.032 3.596 - 3.611: 92.4646% ( 148) 00:14:28.032 3.611 - 3.627: 93.3026% ( 138) 00:14:28.032 3.627 - 3.642: 94.1162% ( 134) 00:14:28.032 3.642 - 3.657: 94.9542% ( 138) 00:14:28.032 3.657 - 3.672: 95.7192% ( 126) 00:14:28.032 3.672 - 3.688: 96.5390% ( 135) 00:14:28.032 3.688 - 3.703: 97.1826% ( 106) 00:14:28.032 3.703 - 3.718: 97.7230% ( 89) 00:14:28.032 3.718 - 3.733: 98.2330% ( 84) 00:14:28.032 3.733 - 3.749: 98.5791% ( 57) 00:14:28.032 3.749 - 3.764: 98.9131% ( 55) 00:14:28.032 3.764 - 3.779: 99.1317% ( 36) 00:14:28.032 3.779 - 3.794: 99.2774% ( 24) 00:14:28.032 3.794 - 3.810: 99.4292% ( 25) 00:14:28.032 3.810 - 3.825: 99.4960% ( 11) 00:14:28.032 3.825 - 3.840: 99.5142% ( 3) 00:14:28.032 3.840 - 3.855: 99.5446% ( 5) 00:14:28.032 3.855 - 3.870: 99.5689% ( 4) 00:14:28.032 3.870 - 3.886: 99.5750% ( 1) 00:14:28.032 5.211 - 5.242: 99.5810% ( 1) 00:14:28.032 5.272 - 5.303: 99.5871% ( 1) 00:14:28.032 5.303 - 5.333: 99.5932% ( 1) 00:14:28.032 5.364 - 5.394: 99.5992% ( 1) 00:14:28.032 5.394 - 5.425: 99.6053% ( 1) 00:14:28.032 5.425 - 5.455: 99.6235% ( 3) 00:14:28.032 5.516 - 5.547: 99.6357% ( 2) 00:14:28.032 5.547 - 5.577: 99.6418% ( 1) 00:14:28.032 5.577 - 5.608: 99.6478% ( 1) 00:14:28.032 5.699 - 5.730: 99.6539% ( 1) 00:14:28.032 5.790 - 5.821: 99.6600% ( 1) 00:14:28.032 5.851 - 5.882: 99.6660% ( 1) 00:14:28.032 5.912 - 5.943: 99.6782% ( 2) 00:14:28.032 6.187 - 6.217: 99.6843% ( 1) 00:14:28.032 6.248 - 6.278: 99.6903% ( 1) 00:14:28.032 6.278 - 6.309: 99.6964% ( 1) 00:14:28.032 6.339 - 6.370: 99.7025% ( 1) 00:14:28.032 6.552 - 6.583: 99.7146% ( 2) 00:14:28.032 6.705 - 6.735: 99.7207% ( 1) 00:14:28.032 6.766 - 6.796: 99.7268% ( 1) 00:14:28.032 6.796 - 6.827: 
99.7328% ( 1) 00:14:28.032 6.949 - 6.979: 99.7510% ( 3) 00:14:28.032 6.979 - 7.010: 99.7571% ( 1) 00:14:28.032 7.010 - 7.040: 99.7632% ( 1) 00:14:28.032 7.040 - 7.070: 99.7693% ( 1) 00:14:28.032 7.070 - 7.101: 99.7753% ( 1) 00:14:28.032 7.131 - 7.162: 99.7814% ( 1) 00:14:28.032 7.436 - 7.467: 99.7936% ( 2) 00:14:28.032 7.650 - 7.680: 99.7996% ( 1) 00:14:28.032 7.771 - 7.802: 99.8178% ( 3) 00:14:28.032 7.985 - 8.046: 99.8239% ( 1) 00:14:28.032 8.168 - 8.229: 99.8300% ( 1) 00:14:28.032 [2024-12-10 04:51:19.111181] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:28.032 8.350 - 8.411: 99.8361% ( 1) 00:14:28.032 8.411 - 8.472: 99.8421% ( 1) 00:14:28.032 8.472 - 8.533: 99.8482% ( 1) 00:14:28.032 8.533 - 8.594: 99.8603% ( 2) 00:14:28.032 8.594 - 8.655: 99.8664% ( 1) 00:14:28.032 8.655 - 8.716: 99.8725% ( 1) 00:14:28.032 9.021 - 9.082: 99.8786% ( 1) 00:14:28.032 9.143 - 9.204: 99.8907% ( 2) 00:14:28.032 9.265 - 9.326: 99.8968% ( 1) 00:14:28.032 9.448 - 9.509: 99.9089% ( 2) 00:14:28.032 10.423 - 10.484: 99.9150% ( 1) 00:14:28.032 15.360 - 15.421: 99.9211% ( 1) 00:14:28.032 2808.686 - 2824.290: 99.9271% ( 1) 00:14:28.032 3994.575 - 4025.783: 100.0000% ( 12) 00:14:28.032 00:14:28.032 Complete histogram 00:14:28.032 ================== 00:14:28.032 Range in us Cumulative Count 00:14:28.032 1.752 - 1.760: 0.1700% ( 28) 00:14:28.032 1.760 - 1.768: 3.6128% ( 567) 00:14:28.032 1.768 - 1.775: 19.9344% ( 2688) 00:14:28.032 1.775 - 1.783: 40.7614% ( 3430) 00:14:28.032 1.783 - 1.790: 51.1021% ( 1703) 00:14:28.032 1.790 - 1.798: 54.5935% ( 575) 00:14:28.032 1.798 - 1.806: 57.0101% ( 398) 00:14:28.032 1.806 - 1.813: 60.8598% ( 634) 00:14:28.032 1.813 - 1.821: 72.0141% ( 1837) 00:14:28.032 1.821 - 1.829: 84.9475% ( 2130) 00:14:28.032 1.829 - 1.836: 91.7178% ( 1115) 00:14:28.032 1.836 - 1.844: 94.2377% ( 415) 00:14:28.032 1.844 - 1.851: 96.0653% ( 301) 00:14:28.032 1.851 - 1.859: 97.2980% ( 203) 00:14:28.032 1.859 - 1.867: 
97.9659% ( 110) 00:14:28.032 1.867 - 1.874: 98.2573% ( 48) 00:14:28.032 1.874 - 1.882: 98.4881% ( 38) 00:14:28.032 1.882 - 1.890: 98.7067% ( 36) 00:14:28.032 1.890 - 1.897: 98.8888% ( 30) 00:14:28.032 1.897 - 1.905: 99.0831% ( 32) 00:14:28.032 1.905 - 1.912: 99.1803% ( 16) 00:14:28.032 1.912 - 1.920: 99.2349% ( 9) 00:14:28.032 1.920 - 1.928: 99.2774% ( 7) 00:14:28.032 1.928 - 1.935: 99.3078% ( 5) 00:14:28.032 1.935 - 1.943: 99.3382% ( 5) 00:14:28.032 1.943 - 1.950: 99.3503% ( 2) 00:14:28.032 1.966 - 1.981: 99.3624% ( 2) 00:14:28.032 1.981 - 1.996: 99.3685% ( 1) 00:14:28.032 2.011 - 2.027: 99.3746% ( 1) 00:14:28.032 2.057 - 2.072: 99.3807% ( 1) 00:14:28.032 2.179 - 2.194: 99.3867% ( 1) 00:14:28.032 3.962 - 3.992: 99.3928% ( 1) 00:14:28.032 4.145 - 4.175: 99.3989% ( 1) 00:14:28.032 4.632 - 4.663: 99.4049% ( 1) 00:14:28.032 4.968 - 4.998: 99.4110% ( 1) 00:14:28.032 5.120 - 5.150: 99.4171% ( 1) 00:14:28.032 5.150 - 5.181: 99.4232% ( 1) 00:14:28.032 5.364 - 5.394: 99.4292% ( 1) 00:14:28.032 5.425 - 5.455: 99.4353% ( 1) 00:14:28.032 5.455 - 5.486: 99.4414% ( 1) 00:14:28.032 5.973 - 6.004: 99.4474% ( 1) 00:14:28.032 6.065 - 6.095: 99.4535% ( 1) 00:14:28.032 6.278 - 6.309: 99.4596% ( 1) 00:14:28.032 6.309 - 6.339: 99.4657% ( 1) 00:14:28.032 6.888 - 6.918: 99.4717% ( 1) 00:14:28.032 6.918 - 6.949: 99.4778% ( 1) 00:14:28.032 6.979 - 7.010: 99.4839% ( 1) 00:14:28.032 7.253 - 7.284: 99.4900% ( 1) 00:14:28.032 7.314 - 7.345: 99.4960% ( 1) 00:14:28.032 7.406 - 7.436: 99.5021% ( 1) 00:14:28.032 7.680 - 7.710: 99.5082% ( 1) 00:14:28.032 7.863 - 7.924: 99.5142% ( 1) 00:14:28.032 11.642 - 11.703: 99.5203% ( 1) 00:14:28.032 17.676 - 17.798: 99.5264% ( 1) 00:14:28.032 17.798 - 17.920: 99.5325% ( 1) 00:14:28.032 998.644 - 1006.446: 99.5446% ( 2) 00:14:28.032 3401.630 - 3417.234: 99.5507% ( 1) 00:14:28.032 3604.480 - 3620.084: 99.5567% ( 1) 00:14:28.032 3994.575 - 4025.783: 99.9879% ( 71) 00:14:28.032 6990.507 - 7021.714: 99.9939% ( 1) 00:14:28.032 7957.943 - 7989.150: 100.0000% ( 1) 
00:14:28.032 00:14:28.032 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:28.032 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:28.032 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:28.032 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:28.032 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:28.292 [ 00:14:28.292 { 00:14:28.292 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:28.292 "subtype": "Discovery", 00:14:28.292 "listen_addresses": [], 00:14:28.292 "allow_any_host": true, 00:14:28.292 "hosts": [] 00:14:28.292 }, 00:14:28.292 { 00:14:28.292 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:28.292 "subtype": "NVMe", 00:14:28.292 "listen_addresses": [ 00:14:28.292 { 00:14:28.292 "trtype": "VFIOUSER", 00:14:28.292 "adrfam": "IPv4", 00:14:28.292 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:28.292 "trsvcid": "0" 00:14:28.292 } 00:14:28.292 ], 00:14:28.292 "allow_any_host": true, 00:14:28.292 "hosts": [], 00:14:28.292 "serial_number": "SPDK1", 00:14:28.292 "model_number": "SPDK bdev Controller", 00:14:28.292 "max_namespaces": 32, 00:14:28.292 "min_cntlid": 1, 00:14:28.292 "max_cntlid": 65519, 00:14:28.292 "namespaces": [ 00:14:28.292 { 00:14:28.292 "nsid": 1, 00:14:28.292 "bdev_name": "Malloc1", 00:14:28.292 "name": "Malloc1", 00:14:28.292 "nguid": "4389764F6A0546E5BCA093976DF3402F", 00:14:28.292 "uuid": "4389764f-6a05-46e5-bca0-93976df3402f" 00:14:28.292 }, 00:14:28.292 { 00:14:28.292 "nsid": 2, 00:14:28.292 "bdev_name": "Malloc3", 00:14:28.292 "name": 
"Malloc3", 00:14:28.292 "nguid": "B2B64F0C33EE419FA75175E2C4F1DF9E", 00:14:28.292 "uuid": "b2b64f0c-33ee-419f-a751-75e2c4f1df9e" 00:14:28.292 } 00:14:28.292 ] 00:14:28.292 }, 00:14:28.292 { 00:14:28.292 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:28.292 "subtype": "NVMe", 00:14:28.292 "listen_addresses": [ 00:14:28.292 { 00:14:28.292 "trtype": "VFIOUSER", 00:14:28.292 "adrfam": "IPv4", 00:14:28.292 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:28.292 "trsvcid": "0" 00:14:28.292 } 00:14:28.292 ], 00:14:28.292 "allow_any_host": true, 00:14:28.292 "hosts": [], 00:14:28.292 "serial_number": "SPDK2", 00:14:28.292 "model_number": "SPDK bdev Controller", 00:14:28.292 "max_namespaces": 32, 00:14:28.292 "min_cntlid": 1, 00:14:28.292 "max_cntlid": 65519, 00:14:28.292 "namespaces": [ 00:14:28.292 { 00:14:28.292 "nsid": 1, 00:14:28.292 "bdev_name": "Malloc2", 00:14:28.292 "name": "Malloc2", 00:14:28.292 "nguid": "E7BDF6A06B8F4E23BF0E18A60ACBCD4C", 00:14:28.292 "uuid": "e7bdf6a0-6b8f-4e23-bf0e-18a60acbcd4c" 00:14:28.292 } 00:14:28.292 ] 00:14:28.292 } 00:14:28.292 ] 00:14:28.292 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:28.292 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:28.292 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=595717 00:14:28.292 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:28.292 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:28.292 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:28.292 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:28.292 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:28.292 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:28.292 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:28.551 [2024-12-10 04:51:19.510571] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:28.551 Malloc4 00:14:28.551 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:28.810 [2024-12-10 04:51:19.747465] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:28.810 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:28.810 Asynchronous Event Request test 00:14:28.810 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:28.810 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:28.810 Registering asynchronous event callbacks... 00:14:28.810 Starting namespace attribute notice tests for all controllers... 00:14:28.810 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:28.810 aer_cb - Changed Namespace 00:14:28.810 Cleaning up... 
00:14:29.069 [ 00:14:29.069 { 00:14:29.069 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:29.069 "subtype": "Discovery", 00:14:29.069 "listen_addresses": [], 00:14:29.069 "allow_any_host": true, 00:14:29.069 "hosts": [] 00:14:29.069 }, 00:14:29.069 { 00:14:29.069 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:29.069 "subtype": "NVMe", 00:14:29.069 "listen_addresses": [ 00:14:29.069 { 00:14:29.069 "trtype": "VFIOUSER", 00:14:29.069 "adrfam": "IPv4", 00:14:29.069 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:29.069 "trsvcid": "0" 00:14:29.069 } 00:14:29.069 ], 00:14:29.069 "allow_any_host": true, 00:14:29.069 "hosts": [], 00:14:29.069 "serial_number": "SPDK1", 00:14:29.070 "model_number": "SPDK bdev Controller", 00:14:29.070 "max_namespaces": 32, 00:14:29.070 "min_cntlid": 1, 00:14:29.070 "max_cntlid": 65519, 00:14:29.070 "namespaces": [ 00:14:29.070 { 00:14:29.070 "nsid": 1, 00:14:29.070 "bdev_name": "Malloc1", 00:14:29.070 "name": "Malloc1", 00:14:29.070 "nguid": "4389764F6A0546E5BCA093976DF3402F", 00:14:29.070 "uuid": "4389764f-6a05-46e5-bca0-93976df3402f" 00:14:29.070 }, 00:14:29.070 { 00:14:29.070 "nsid": 2, 00:14:29.070 "bdev_name": "Malloc3", 00:14:29.070 "name": "Malloc3", 00:14:29.070 "nguid": "B2B64F0C33EE419FA75175E2C4F1DF9E", 00:14:29.070 "uuid": "b2b64f0c-33ee-419f-a751-75e2c4f1df9e" 00:14:29.070 } 00:14:29.070 ] 00:14:29.070 }, 00:14:29.070 { 00:14:29.070 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:29.070 "subtype": "NVMe", 00:14:29.070 "listen_addresses": [ 00:14:29.070 { 00:14:29.070 "trtype": "VFIOUSER", 00:14:29.070 "adrfam": "IPv4", 00:14:29.070 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:29.070 "trsvcid": "0" 00:14:29.070 } 00:14:29.070 ], 00:14:29.070 "allow_any_host": true, 00:14:29.070 "hosts": [], 00:14:29.070 "serial_number": "SPDK2", 00:14:29.070 "model_number": "SPDK bdev Controller", 00:14:29.070 "max_namespaces": 32, 00:14:29.070 "min_cntlid": 1, 00:14:29.070 "max_cntlid": 65519, 00:14:29.070 "namespaces": [ 
00:14:29.070 { 00:14:29.070 "nsid": 1, 00:14:29.070 "bdev_name": "Malloc2", 00:14:29.070 "name": "Malloc2", 00:14:29.070 "nguid": "E7BDF6A06B8F4E23BF0E18A60ACBCD4C", 00:14:29.070 "uuid": "e7bdf6a0-6b8f-4e23-bf0e-18a60acbcd4c" 00:14:29.070 }, 00:14:29.070 { 00:14:29.070 "nsid": 2, 00:14:29.070 "bdev_name": "Malloc4", 00:14:29.070 "name": "Malloc4", 00:14:29.070 "nguid": "8A2D6BD0FD2646499478044B1447F1F1", 00:14:29.070 "uuid": "8a2d6bd0-fd26-4649-9478-044b1447f1f1" 00:14:29.070 } 00:14:29.070 ] 00:14:29.070 } 00:14:29.070 ] 00:14:29.070 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 595717 00:14:29.070 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:29.070 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 587564 00:14:29.070 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 587564 ']' 00:14:29.070 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 587564 00:14:29.070 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:29.070 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.070 04:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 587564 00:14:29.070 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.070 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.070 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 587564' 00:14:29.070 killing process with pid 587564 00:14:29.070 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 587564 00:14:29.070 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 587564 00:14:29.329 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:29.329 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:29.329 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:29.329 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:29.329 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:29.329 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=595946 00:14:29.329 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 595946' 00:14:29.329 Process pid: 595946 00:14:29.329 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:29.329 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:29.329 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 595946 00:14:29.329 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 595946 ']' 00:14:29.329 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.329 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.329 04:51:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.329 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.329 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:29.329 [2024-12-10 04:51:20.322565] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:29.329 [2024-12-10 04:51:20.323428] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:14:29.329 [2024-12-10 04:51:20.323466] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.329 [2024-12-10 04:51:20.398544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:29.329 [2024-12-10 04:51:20.438951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.329 [2024-12-10 04:51:20.438989] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.329 [2024-12-10 04:51:20.438996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.329 [2024-12-10 04:51:20.439002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.329 [2024-12-10 04:51:20.439007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:29.329 [2024-12-10 04:51:20.444188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.329 [2024-12-10 04:51:20.444233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.329 [2024-12-10 04:51:20.444343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.329 [2024-12-10 04:51:20.444344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:29.588 [2024-12-10 04:51:20.511648] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:29.588 [2024-12-10 04:51:20.512160] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:29.588 [2024-12-10 04:51:20.512257] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:29.588 [2024-12-10 04:51:20.512682] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:29.588 [2024-12-10 04:51:20.512718] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:29.588 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.588 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:29.588 04:51:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:30.526 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:30.785 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:30.785 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:30.785 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:30.785 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:30.785 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:31.045 Malloc1 00:14:31.045 04:51:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:31.304 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:31.304 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:31.563 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:31.563 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:31.563 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:31.822 Malloc2 00:14:31.822 04:51:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:32.081 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:32.081 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:32.342 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:32.342 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 595946 00:14:32.342 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 595946 ']' 00:14:32.342 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 595946 00:14:32.342 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:32.342 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.342 04:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 595946 00:14:32.342 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:32.342 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:32.342 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 595946' 00:14:32.342 killing process with pid 595946 00:14:32.342 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 595946 00:14:32.342 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 595946 00:14:32.601 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:32.601 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:32.601 00:14:32.601 real 0m51.459s 00:14:32.601 user 3m19.191s 00:14:32.601 sys 0m3.222s 00:14:32.601 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.601 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:32.601 ************************************ 00:14:32.601 END TEST nvmf_vfio_user 00:14:32.601 ************************************ 00:14:32.601 04:51:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:32.601 04:51:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:32.601 04:51:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.601 04:51:23 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:32.601 ************************************ 00:14:32.601 START TEST nvmf_vfio_user_nvme_compliance 00:14:32.601 ************************************ 00:14:32.601 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:32.861 * Looking for test storage... 00:14:32.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:32.861 04:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:32.861 04:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:32.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.861 --rc genhtml_branch_coverage=1 00:14:32.861 --rc genhtml_function_coverage=1 00:14:32.861 --rc genhtml_legend=1 00:14:32.861 --rc geninfo_all_blocks=1 00:14:32.861 --rc geninfo_unexecuted_blocks=1 00:14:32.861 00:14:32.861 ' 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:32.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.861 --rc genhtml_branch_coverage=1 00:14:32.861 --rc genhtml_function_coverage=1 00:14:32.861 --rc genhtml_legend=1 00:14:32.861 --rc geninfo_all_blocks=1 00:14:32.861 --rc geninfo_unexecuted_blocks=1 00:14:32.861 00:14:32.861 ' 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:32.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.861 --rc genhtml_branch_coverage=1 00:14:32.861 --rc genhtml_function_coverage=1 00:14:32.861 --rc 
genhtml_legend=1 00:14:32.861 --rc geninfo_all_blocks=1 00:14:32.861 --rc geninfo_unexecuted_blocks=1 00:14:32.861 00:14:32.861 ' 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:32.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.861 --rc genhtml_branch_coverage=1 00:14:32.861 --rc genhtml_function_coverage=1 00:14:32.861 --rc genhtml_legend=1 00:14:32.861 --rc geninfo_all_blocks=1 00:14:32.861 --rc geninfo_unexecuted_blocks=1 00:14:32.861 00:14:32.861 ' 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.861 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.862 04:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:32.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:32.862 04:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=596486 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 596486' 00:14:32.862 Process pid: 596486 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 596486 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 596486 ']' 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.862 04:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:32.862 [2024-12-10 04:51:23.973290] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:14:32.862 [2024-12-10 04:51:23.973337] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.121 [2024-12-10 04:51:24.043277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:33.121 [2024-12-10 04:51:24.086072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.121 [2024-12-10 04:51:24.086108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.121 [2024-12-10 04:51:24.086116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.121 [2024-12-10 04:51:24.086122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.121 [2024-12-10 04:51:24.086128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:33.121 [2024-12-10 04:51:24.087361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.121 [2024-12-10 04:51:24.087393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.121 [2024-12-10 04:51:24.087393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.121 04:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.121 04:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:33.121 04:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.499 04:51:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:34.499 malloc0 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.499 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:34.500 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.500 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:34.500 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:34.500 04:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:34.500 00:14:34.500 00:14:34.500 CUnit - A unit testing framework for C - Version 2.1-3 00:14:34.500 http://cunit.sourceforge.net/ 00:14:34.500 00:14:34.500 00:14:34.500 Suite: nvme_compliance 00:14:34.500 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-10 04:51:25.434627] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.500 [2024-12-10 04:51:25.435967] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:34.500 [2024-12-10 04:51:25.435985] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:34.500 [2024-12-10 04:51:25.435991] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:34.500 [2024-12-10 04:51:25.437645] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.500 passed 00:14:34.500 Test: admin_identify_ctrlr_verify_fused ...[2024-12-10 04:51:25.514211] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.500 [2024-12-10 04:51:25.517241] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.500 passed 00:14:34.500 Test: admin_identify_ns ...[2024-12-10 04:51:25.596036] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.759 [2024-12-10 04:51:25.656182] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:34.759 [2024-12-10 04:51:25.664181] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:34.759 [2024-12-10 04:51:25.685272] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:34.759 passed 00:14:34.759 Test: admin_get_features_mandatory_features ...[2024-12-10 04:51:25.760994] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.759 [2024-12-10 04:51:25.764018] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.759 passed 00:14:34.759 Test: admin_get_features_optional_features ...[2024-12-10 04:51:25.840533] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.759 [2024-12-10 04:51:25.843553] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.759 passed 00:14:35.018 Test: admin_set_features_number_of_queues ...[2024-12-10 04:51:25.920258] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.018 [2024-12-10 04:51:26.027254] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.018 passed 00:14:35.018 Test: admin_get_log_page_mandatory_logs ...[2024-12-10 04:51:26.100969] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.018 [2024-12-10 04:51:26.103995] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.018 passed 00:14:35.276 Test: admin_get_log_page_with_lpo ...[2024-12-10 04:51:26.181598] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.276 [2024-12-10 04:51:26.249184] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:35.276 [2024-12-10 04:51:26.262231] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.276 passed 00:14:35.276 Test: fabric_property_get ...[2024-12-10 04:51:26.337766] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.276 [2024-12-10 04:51:26.339023] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:35.276 [2024-12-10 04:51:26.340787] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.276 passed 00:14:35.535 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-10 04:51:26.419295] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.535 [2024-12-10 04:51:26.420520] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:35.535 [2024-12-10 04:51:26.422318] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.535 passed 00:14:35.535 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-10 04:51:26.500475] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.535 [2024-12-10 04:51:26.585171] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:35.535 [2024-12-10 04:51:26.601171] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:35.535 [2024-12-10 04:51:26.606252] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.535 passed 00:14:35.794 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-10 04:51:26.682017] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.794 [2024-12-10 04:51:26.683272] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:35.794 [2024-12-10 04:51:26.685036] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.794 passed 00:14:35.794 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-10 04:51:26.760733] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.794 [2024-12-10 04:51:26.851188] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:35.794 [2024-12-10 
04:51:26.875180] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:35.794 [2024-12-10 04:51:26.880256] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.794 passed 00:14:36.053 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-10 04:51:26.956827] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.053 [2024-12-10 04:51:26.958067] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:36.053 [2024-12-10 04:51:26.958093] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:36.053 [2024-12-10 04:51:26.959850] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.053 passed 00:14:36.053 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-10 04:51:27.037482] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.053 [2024-12-10 04:51:27.131175] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:36.053 [2024-12-10 04:51:27.139177] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:36.053 [2024-12-10 04:51:27.147174] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:36.053 [2024-12-10 04:51:27.155174] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:36.053 [2024-12-10 04:51:27.184262] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.312 passed 00:14:36.312 Test: admin_create_io_sq_verify_pc ...[2024-12-10 04:51:27.261018] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.312 [2024-12-10 04:51:27.277184] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:36.312 [2024-12-10 04:51:27.295213] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.312 passed 00:14:36.312 Test: admin_create_io_qp_max_qps ...[2024-12-10 04:51:27.370780] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:37.690 [2024-12-10 04:51:28.462178] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:37.949 [2024-12-10 04:51:28.848440] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:37.949 passed 00:14:37.949 Test: admin_create_io_sq_shared_cq ...[2024-12-10 04:51:28.925329] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:37.949 [2024-12-10 04:51:29.057174] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:38.207 [2024-12-10 04:51:29.094230] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:38.207 passed 00:14:38.207 00:14:38.208 Run Summary: Type Total Ran Passed Failed Inactive 00:14:38.208 suites 1 1 n/a 0 0 00:14:38.208 tests 18 18 18 0 0 00:14:38.208 asserts 360 360 360 0 n/a 00:14:38.208 00:14:38.208 Elapsed time = 1.503 seconds 00:14:38.208 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 596486 00:14:38.208 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 596486 ']' 00:14:38.208 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 596486 00:14:38.208 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:38.208 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.208 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 596486 00:14:38.208 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.208 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.208 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 596486' 00:14:38.208 killing process with pid 596486 00:14:38.208 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 596486 00:14:38.208 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 596486 00:14:38.466 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:38.466 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:38.466 00:14:38.466 real 0m5.657s 00:14:38.466 user 0m15.823s 00:14:38.466 sys 0m0.518s 00:14:38.466 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.467 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:38.467 ************************************ 00:14:38.467 END TEST nvmf_vfio_user_nvme_compliance 00:14:38.467 ************************************ 00:14:38.467 04:51:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:38.467 04:51:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:38.467 04:51:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.467 04:51:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:38.467 ************************************ 00:14:38.467 START TEST nvmf_vfio_user_fuzz 00:14:38.467 ************************************ 00:14:38.467 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:38.467 * Looking for test storage... 00:14:38.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.467 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:38.467 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:14:38.467 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:38.467 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:38.467 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:38.467 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:38.467 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:38.725 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.725 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:38.725 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:38.725 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:38.725 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:38.725 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:38.725 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:38.726 04:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:38.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.726 --rc genhtml_branch_coverage=1 00:14:38.726 --rc genhtml_function_coverage=1 00:14:38.726 --rc genhtml_legend=1 00:14:38.726 --rc geninfo_all_blocks=1 00:14:38.726 --rc geninfo_unexecuted_blocks=1 00:14:38.726 00:14:38.726 ' 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:38.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.726 --rc genhtml_branch_coverage=1 00:14:38.726 --rc genhtml_function_coverage=1 00:14:38.726 --rc genhtml_legend=1 00:14:38.726 --rc geninfo_all_blocks=1 00:14:38.726 --rc geninfo_unexecuted_blocks=1 00:14:38.726 00:14:38.726 ' 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:38.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.726 --rc genhtml_branch_coverage=1 00:14:38.726 --rc genhtml_function_coverage=1 00:14:38.726 --rc genhtml_legend=1 00:14:38.726 --rc geninfo_all_blocks=1 00:14:38.726 --rc geninfo_unexecuted_blocks=1 00:14:38.726 00:14:38.726 ' 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:38.726 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:38.726 --rc genhtml_branch_coverage=1 00:14:38.726 --rc genhtml_function_coverage=1 00:14:38.726 --rc genhtml_legend=1 00:14:38.726 --rc geninfo_all_blocks=1 00:14:38.726 --rc geninfo_unexecuted_blocks=1 00:14:38.726 00:14:38.726 ' 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.726 04:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:38.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=597528 00:14:38.726 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 597528' 00:14:38.726 Process pid: 597528 00:14:38.727 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:38.727 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:38.727 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 597528 00:14:38.727 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 597528 ']' 00:14:38.727 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.727 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.727 04:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.727 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.727 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:38.985 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:38.985 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:38.985 04:51:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:39.921 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:39.921 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.921 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:39.921 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.921 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:39.921 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:39.921 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.921 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:39.921 malloc0 00:14:39.921 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.921 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:39.921 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.921 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:39.921 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.921 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:39.921 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.921 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:39.921 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.922 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:39.922 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.922 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:39.922 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.922 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:39.922 04:51:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:12.018 Fuzzing completed. Shutting down the fuzz application 00:15:12.018 00:15:12.018 Dumping successful admin opcodes: 00:15:12.018 9, 10, 00:15:12.018 Dumping successful io opcodes: 00:15:12.018 0, 00:15:12.018 NS: 0x20000081ef00 I/O qp, Total commands completed: 998925, total successful commands: 3911, random_seed: 1333670784 00:15:12.018 NS: 0x20000081ef00 admin qp, Total commands completed: 246272, total successful commands: 57, random_seed: 3368525120 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 597528 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 597528 ']' 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 597528 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597528 00:15:12.018 04:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 597528' 00:15:12.018 killing process with pid 597528 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 597528 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 597528 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:12.018 00:15:12.018 real 0m32.230s 00:15:12.018 user 0m29.629s 00:15:12.018 sys 0m31.278s 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:12.018 ************************************ 00:15:12.018 END TEST nvmf_vfio_user_fuzz 00:15:12.018 ************************************ 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:12.018 ************************************ 00:15:12.018 START TEST nvmf_auth_target 00:15:12.018 ************************************ 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:12.018 * Looking for test storage... 00:15:12.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.018 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.019 04:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.019 04:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:12.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.019 --rc genhtml_branch_coverage=1 00:15:12.019 --rc genhtml_function_coverage=1 00:15:12.019 --rc genhtml_legend=1 00:15:12.019 --rc geninfo_all_blocks=1 00:15:12.019 --rc geninfo_unexecuted_blocks=1 00:15:12.019 00:15:12.019 ' 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:12.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.019 --rc genhtml_branch_coverage=1 00:15:12.019 --rc genhtml_function_coverage=1 00:15:12.019 --rc genhtml_legend=1 00:15:12.019 --rc geninfo_all_blocks=1 00:15:12.019 --rc geninfo_unexecuted_blocks=1 00:15:12.019 00:15:12.019 ' 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:12.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.019 --rc genhtml_branch_coverage=1 00:15:12.019 --rc genhtml_function_coverage=1 00:15:12.019 --rc genhtml_legend=1 00:15:12.019 --rc geninfo_all_blocks=1 00:15:12.019 --rc geninfo_unexecuted_blocks=1 00:15:12.019 00:15:12.019 ' 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:12.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.019 --rc genhtml_branch_coverage=1 00:15:12.019 --rc genhtml_function_coverage=1 00:15:12.019 --rc genhtml_legend=1 00:15:12.019 
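The xtrace above steps through the version comparison in scripts/common.sh (`lt 1.15 2` → `cmp_versions 1.15 '<' 2`): both version strings are split on `.` into arrays and compared numerically field by field. A minimal standalone sketch of the same less-than test; `version_lt` is a hypothetical name, not the script's own API:

```shell
# Split dotted versions on "." and compare numerically, field by field;
# a missing field counts as 0. Returns 0 (true) iff $1 < $2.
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1   # equal is not less-than
}
```

So `version_lt 1.15 2` succeeds, which is why the trace takes the pre-lcov-1.15 option branch. (Like the original, this assumes purely numeric fields.)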
--rc geninfo_all_blocks=1 00:15:12.019 --rc geninfo_unexecuted_blocks=1 00:15:12.019 00:15:12.019 ' 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.019 
04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.019 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:12.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:12.020 04:52:01 
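Each nesting level of the test run re-sources paths/export.sh, which prepends the same /opt/golangci, /opt/protoc and /opt/go entries again, so the exported PATH above carries many duplicates. They are harmless but noisy; a sketch of de-duplication that keeps the first occurrence of each component (hypothetical helper, not something the test scripts use):

```shell
# Drop repeated PATH components, preserving first-seen order.
dedupe_path() {
  local IFS=: seen=: out= p
  for p in $1; do
    case $seen in *":$p:"*) continue ;; esac   # already emitted
    seen=$seen$p:
    out=${out:+$out:}$p
  done
  printf '%s\n' "$out"
}
```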
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:12.020 04:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:12.020 04:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:17.297 04:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:17.297 04:52:07 
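gather_supported_nvmf_pci_devs above builds the e810, x722 and mlx arrays from a fixed set of vendor:device IDs (Intel 0x8086, Mellanox 0x15b3). The classification it applies can be sketched as a single lookup; `nic_family` is a hypothetical helper, with the device IDs taken verbatim from the trace:

```shell
# Map a device ID from the trace to the family nvmf/common.sh groups it under.
nic_family() {
  case $1 in
    0x1592|0x159b) echo e810 ;;      # Intel E810 parts
    0x37d2)        echo x722 ;;      # Intel X722 part
    0x1013|0x1015|0x1017|0x1019|0x101b|0x101d|0x1021|0xa2d6|0xa2dc)
                   echo mlx ;;       # Mellanox parts
    *)             echo unknown ;;
  esac
}
```

This is why both 0000:af:00.0 and 0000:af:00.1 (0x8086 - 0x159b) land in the e810 list and the `[[ e810 == e810 ]]` branch is taken.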
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:17.297 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:17.297 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.297 
04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:17.297 Found net devices under 0000:af:00.0: cvl_0_0 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:17.297 
04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:17.297 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:17.298 Found net devices under 0000:af:00.1: cvl_0_1 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:17.298 04:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:17.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:15:17.298 00:15:17.298 --- 10.0.0.2 ping statistics --- 00:15:17.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.298 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:15:17.298 00:15:17.298 --- 10.0.0.1 ping statistics --- 00:15:17.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.298 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
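nvmf_tcp_init moves cvl_0_0 into the cvl_0_0_ns_spdk namespace, assigns 10.0.0.1/10.0.0.2, then verifies the plumbing with one ping in each direction. If a script needs the latency from those summaries, the closing `rtt min/avg/max/mdev = ...` line is easy to pick apart (hypothetical helper, shown only as a parsing sketch):

```shell
# Extract the average rtt (ms) from ping's closing
# "rtt min/avg/max/mdev = a/b/c/d ms" line.
ping_avg_rtt() {
  awk -F'[=/ ]+' '/^rtt/ { print $7 }'
}
```

For the first ping above this would yield 0.467, for the second 0.192.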
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:17.298 04:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=605773 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 605773 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 605773 ']' 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=605910 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=040cbd54516b0a43f82c555deb0d52540c41c861b68e92f7 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Svr 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 040cbd54516b0a43f82c555deb0d52540c41c861b68e92f7 0 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 040cbd54516b0a43f82c555deb0d52540c41c861b68e92f7 0 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=040cbd54516b0a43f82c555deb0d52540c41c861b68e92f7 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Svr 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Svr 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Svr 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c5a47b202c799f9974153213ea0057ced7a853489eee30ce9578b8a0436f2451 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.xzU 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c5a47b202c799f9974153213ea0057ced7a853489eee30ce9578b8a0436f2451 3 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c5a47b202c799f9974153213ea0057ced7a853489eee30ce9578b8a0436f2451 3 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:17.298 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c5a47b202c799f9974153213ea0057ced7a853489eee30ce9578b8a0436f2451 00:15:17.299 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:15:17.299 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:17.299 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.xzU 00:15:17.299 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.xzU 00:15:17.299 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.xzU 00:15:17.299 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:17.299 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:17.299 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:17.299 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:17.299 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:17.299 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:17.299 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:17.299 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ed4acdf7dd2c52c3af98d4910b2dfe41 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.IAh 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ed4acdf7dd2c52c3af98d4910b2dfe41 1 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
ed4acdf7dd2c52c3af98d4910b2dfe41 1 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ed4acdf7dd2c52c3af98d4910b2dfe41 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.IAh 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.IAh 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.IAh 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2c9e8d462684b14754e1543b196794029cf2984673eb5ceb 00:15:17.559 04:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.68j 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2c9e8d462684b14754e1543b196794029cf2984673eb5ceb 2 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2c9e8d462684b14754e1543b196794029cf2984673eb5ceb 2 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2c9e8d462684b14754e1543b196794029cf2984673eb5ceb 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.68j 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.68j 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.68j 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=27f1d782607f0c376ab5446e38cda92665ec6d372acee882 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.8HE 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 27f1d782607f0c376ab5446e38cda92665ec6d372acee882 2 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 27f1d782607f0c376ab5446e38cda92665ec6d372acee882 2 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=27f1d782607f0c376ab5446e38cda92665ec6d372acee882 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.8HE 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.8HE 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.8HE 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fbb09028f10b473fa74748c367318342 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.6Oh 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fbb09028f10b473fa74748c367318342 1 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fbb09028f10b473fa74748c367318342 1 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fbb09028f10b473fa74748c367318342 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.6Oh 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.6Oh 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.6Oh 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:17.559 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:17.560 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:17.560 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:17.560 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:17.560 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2916abd1e9b94c1bc0af736fb031cd7c2af1dba5e31ebaa30414b063173ff9e8 00:15:17.560 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:17.560 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.VxF 00:15:17.560 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2916abd1e9b94c1bc0af736fb031cd7c2af1dba5e31ebaa30414b063173ff9e8 3 00:15:17.560 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 2916abd1e9b94c1bc0af736fb031cd7c2af1dba5e31ebaa30414b063173ff9e8 3 00:15:17.560 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:17.560 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:17.560 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2916abd1e9b94c1bc0af736fb031cd7c2af1dba5e31ebaa30414b063173ff9e8 00:15:17.560 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:17.560 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.VxF 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.VxF 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.VxF 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 605773 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 605773 ']' 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 605910 /var/tmp/host.sock 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 605910 ']' 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:17.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.819 04:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.078 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.078 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:18.078 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:18.078 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.078 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.078 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.078 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:18.078 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Svr 00:15:18.078 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.078 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.078 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.078 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Svr 00:15:18.078 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Svr 00:15:18.337 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.xzU ]] 00:15:18.337 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xzU 00:15:18.337 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.337 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.337 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.337 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xzU 00:15:18.337 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xzU 00:15:18.597 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:18.597 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.IAh 00:15:18.597 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.597 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.597 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.597 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.IAh 00:15:18.597 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.IAh 00:15:18.856 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.68j ]] 00:15:18.856 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.68j 00:15:18.856 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.856 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.856 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.856 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.68j 00:15:18.856 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.68j 00:15:18.856 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:18.856 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.8HE 00:15:18.856 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.856 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.856 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.856 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.8HE 00:15:18.856 04:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.8HE 00:15:19.115 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.6Oh ]] 00:15:19.115 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Oh 00:15:19.115 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.115 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.115 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.115 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Oh 00:15:19.115 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Oh 00:15:19.374 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:19.374 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.VxF 00:15:19.374 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.374 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.374 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.374 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.VxF 00:15:19.374 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.VxF 00:15:19.633 04:52:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:19.633 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:19.634 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:19.634 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.634 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:19.634 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:19.634 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:19.634 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.634 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.634 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:19.634 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:19.634 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.634 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.634 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.634 04:52:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.893 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.893 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.893 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.893 04:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.893 00:15:19.893 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.893 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.893 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.152 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.152 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.152 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.152 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:20.152 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.152 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.152 { 00:15:20.152 "cntlid": 1, 00:15:20.152 "qid": 0, 00:15:20.152 "state": "enabled", 00:15:20.152 "thread": "nvmf_tgt_poll_group_000", 00:15:20.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:20.152 "listen_address": { 00:15:20.152 "trtype": "TCP", 00:15:20.152 "adrfam": "IPv4", 00:15:20.152 "traddr": "10.0.0.2", 00:15:20.152 "trsvcid": "4420" 00:15:20.152 }, 00:15:20.152 "peer_address": { 00:15:20.152 "trtype": "TCP", 00:15:20.152 "adrfam": "IPv4", 00:15:20.152 "traddr": "10.0.0.1", 00:15:20.152 "trsvcid": "51788" 00:15:20.152 }, 00:15:20.152 "auth": { 00:15:20.152 "state": "completed", 00:15:20.152 "digest": "sha256", 00:15:20.152 "dhgroup": "null" 00:15:20.152 } 00:15:20.152 } 00:15:20.152 ]' 00:15:20.152 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.152 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.152 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.411 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:20.411 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.411 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.411 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.411 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.671 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:15:20.671 04:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.239 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.240 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.499 00:15:21.499 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.499 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.499 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.758 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.758 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.758 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.758 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.758 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.758 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.758 { 00:15:21.758 "cntlid": 3, 00:15:21.758 "qid": 0, 00:15:21.758 "state": "enabled", 00:15:21.758 "thread": "nvmf_tgt_poll_group_000", 00:15:21.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:21.758 "listen_address": { 00:15:21.758 "trtype": "TCP", 00:15:21.758 "adrfam": "IPv4", 00:15:21.758 
"traddr": "10.0.0.2", 00:15:21.758 "trsvcid": "4420" 00:15:21.758 }, 00:15:21.758 "peer_address": { 00:15:21.758 "trtype": "TCP", 00:15:21.758 "adrfam": "IPv4", 00:15:21.758 "traddr": "10.0.0.1", 00:15:21.758 "trsvcid": "51816" 00:15:21.758 }, 00:15:21.758 "auth": { 00:15:21.758 "state": "completed", 00:15:21.758 "digest": "sha256", 00:15:21.758 "dhgroup": "null" 00:15:21.758 } 00:15:21.758 } 00:15:21.758 ]' 00:15:21.758 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.758 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:21.758 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.758 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:21.758 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.017 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.017 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.017 04:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.017 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:15:22.017 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:15:22.585 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.585 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:22.585 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.585 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.585 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.585 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.585 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:22.585 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:22.845 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:22.845 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.845 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:22.845 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:22.845 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:22.845 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.845 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.845 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.845 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.845 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.845 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.845 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.845 04:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.104 00:15:23.104 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.104 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.104 
04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.413 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.413 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.413 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.413 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.413 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.413 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.413 { 00:15:23.413 "cntlid": 5, 00:15:23.413 "qid": 0, 00:15:23.413 "state": "enabled", 00:15:23.413 "thread": "nvmf_tgt_poll_group_000", 00:15:23.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:23.413 "listen_address": { 00:15:23.413 "trtype": "TCP", 00:15:23.413 "adrfam": "IPv4", 00:15:23.413 "traddr": "10.0.0.2", 00:15:23.413 "trsvcid": "4420" 00:15:23.413 }, 00:15:23.413 "peer_address": { 00:15:23.413 "trtype": "TCP", 00:15:23.413 "adrfam": "IPv4", 00:15:23.413 "traddr": "10.0.0.1", 00:15:23.413 "trsvcid": "51852" 00:15:23.413 }, 00:15:23.413 "auth": { 00:15:23.413 "state": "completed", 00:15:23.413 "digest": "sha256", 00:15:23.413 "dhgroup": "null" 00:15:23.413 } 00:15:23.413 } 00:15:23.413 ]' 00:15:23.413 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.413 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:23.413 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:15:23.413 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:23.413 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.413 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.413 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.413 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.720 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:15:23.720 04:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:15:24.344 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.344 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:24.344 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.344 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.344 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.344 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.344 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:24.344 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:24.603 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:24.603 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.603 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:24.603 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:24.603 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:24.603 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.603 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:24.603 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.603 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:24.603 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.603 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:24.603 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.603 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.862 00:15:24.862 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.862 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.862 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.862 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.862 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.862 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.862 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.862 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.862 
04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.862 { 00:15:24.862 "cntlid": 7, 00:15:24.862 "qid": 0, 00:15:24.862 "state": "enabled", 00:15:24.862 "thread": "nvmf_tgt_poll_group_000", 00:15:24.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:24.862 "listen_address": { 00:15:24.862 "trtype": "TCP", 00:15:24.862 "adrfam": "IPv4", 00:15:24.862 "traddr": "10.0.0.2", 00:15:24.862 "trsvcid": "4420" 00:15:24.862 }, 00:15:24.862 "peer_address": { 00:15:24.862 "trtype": "TCP", 00:15:24.862 "adrfam": "IPv4", 00:15:24.862 "traddr": "10.0.0.1", 00:15:24.862 "trsvcid": "51868" 00:15:24.862 }, 00:15:24.862 "auth": { 00:15:24.862 "state": "completed", 00:15:24.862 "digest": "sha256", 00:15:24.862 "dhgroup": "null" 00:15:24.862 } 00:15:24.862 } 00:15:24.862 ]' 00:15:24.862 04:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.121 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.121 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.121 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:25.121 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.121 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.121 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.121 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.380 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:15:25.380 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:15:25.948 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.948 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:25.948 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.948 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.948 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.948 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:25.948 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.948 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:25.948 04:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:25.948 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:25.948 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.948 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.948 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:25.948 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:25.948 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.948 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.948 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.948 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.948 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.948 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.948 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.948 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.207 00:15:26.207 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.207 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.207 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.467 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.467 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.467 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.467 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.467 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.467 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.467 { 00:15:26.467 "cntlid": 9, 00:15:26.467 "qid": 0, 00:15:26.467 "state": "enabled", 00:15:26.467 "thread": "nvmf_tgt_poll_group_000", 00:15:26.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:26.467 "listen_address": { 00:15:26.467 "trtype": "TCP", 00:15:26.467 "adrfam": "IPv4", 00:15:26.467 "traddr": "10.0.0.2", 00:15:26.467 "trsvcid": "4420" 00:15:26.467 }, 00:15:26.467 "peer_address": { 00:15:26.467 "trtype": "TCP", 00:15:26.467 "adrfam": "IPv4", 00:15:26.467 "traddr": "10.0.0.1", 00:15:26.467 "trsvcid": "33132" 00:15:26.467 
}, 00:15:26.467 "auth": { 00:15:26.467 "state": "completed", 00:15:26.467 "digest": "sha256", 00:15:26.467 "dhgroup": "ffdhe2048" 00:15:26.467 } 00:15:26.467 } 00:15:26.467 ]' 00:15:26.467 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.467 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.467 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.726 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:26.726 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.726 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.726 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.726 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.726 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:15:26.726 04:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret 
DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:15:27.293 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.552 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:27.552 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.552 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.552 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.552 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.552 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:27.553 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:27.553 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:27.553 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.553 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:27.553 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:27.553 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:27.553 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.553 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.553 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.553 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.553 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.553 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.553 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.553 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.812 00:15:27.812 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.812 04:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.812 04:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.071 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.071 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.071 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.071 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.071 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.071 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.071 { 00:15:28.071 "cntlid": 11, 00:15:28.071 "qid": 0, 00:15:28.071 "state": "enabled", 00:15:28.071 "thread": "nvmf_tgt_poll_group_000", 00:15:28.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:28.071 "listen_address": { 00:15:28.071 "trtype": "TCP", 00:15:28.071 "adrfam": "IPv4", 00:15:28.071 "traddr": "10.0.0.2", 00:15:28.071 "trsvcid": "4420" 00:15:28.071 }, 00:15:28.071 "peer_address": { 00:15:28.071 "trtype": "TCP", 00:15:28.071 "adrfam": "IPv4", 00:15:28.071 "traddr": "10.0.0.1", 00:15:28.071 "trsvcid": "33158" 00:15:28.071 }, 00:15:28.071 "auth": { 00:15:28.071 "state": "completed", 00:15:28.071 "digest": "sha256", 00:15:28.071 "dhgroup": "ffdhe2048" 00:15:28.071 } 00:15:28.071 } 00:15:28.071 ]' 00:15:28.071 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.071 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.071 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.071 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:28.071 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.330 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.330 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.330 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.330 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:15:28.330 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:15:28.899 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.899 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:28.899 04:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.899 04:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.899 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.899 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.899 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:28.899 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:29.158 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:29.158 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.158 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:29.158 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:29.158 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:29.158 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.158 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.158 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.158 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.158 04:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.158 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.158 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.158 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.416 00:15:29.416 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.416 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.416 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.675 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.675 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.675 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.675 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.675 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.675 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.675 { 00:15:29.675 "cntlid": 13, 00:15:29.675 "qid": 0, 00:15:29.675 "state": "enabled", 00:15:29.675 "thread": "nvmf_tgt_poll_group_000", 00:15:29.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:29.675 "listen_address": { 00:15:29.675 "trtype": "TCP", 00:15:29.675 "adrfam": "IPv4", 00:15:29.675 "traddr": "10.0.0.2", 00:15:29.675 "trsvcid": "4420" 00:15:29.675 }, 00:15:29.675 "peer_address": { 00:15:29.675 "trtype": "TCP", 00:15:29.675 "adrfam": "IPv4", 00:15:29.675 "traddr": "10.0.0.1", 00:15:29.675 "trsvcid": "33180" 00:15:29.675 }, 00:15:29.675 "auth": { 00:15:29.675 "state": "completed", 00:15:29.675 "digest": "sha256", 00:15:29.675 "dhgroup": "ffdhe2048" 00:15:29.675 } 00:15:29.675 } 00:15:29.675 ]' 00:15:29.675 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.675 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.675 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.675 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:29.675 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.935 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.935 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.935 04:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:15:29.935 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:15:29.935 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:15:30.502 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.502 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:30.502 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.502 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.502 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.502 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.502 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:30.502 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:30.760 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:30.760 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.760 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:30.760 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:30.760 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:30.760 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.760 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:30.760 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.760 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.760 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.760 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:30.760 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:30.761 04:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.019 00:15:31.019 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.019 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.019 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.278 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.278 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.278 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.278 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.278 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.278 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.278 { 00:15:31.278 "cntlid": 15, 00:15:31.278 "qid": 0, 00:15:31.278 "state": "enabled", 00:15:31.278 "thread": "nvmf_tgt_poll_group_000", 00:15:31.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:31.278 "listen_address": { 00:15:31.278 "trtype": "TCP", 00:15:31.278 "adrfam": "IPv4", 00:15:31.278 "traddr": "10.0.0.2", 00:15:31.278 "trsvcid": "4420" 00:15:31.278 }, 00:15:31.278 "peer_address": { 00:15:31.278 "trtype": "TCP", 00:15:31.278 "adrfam": "IPv4", 00:15:31.278 "traddr": "10.0.0.1", 00:15:31.278 "trsvcid": "33206" 00:15:31.278 }, 00:15:31.278 "auth": { 00:15:31.278 
"state": "completed", 00:15:31.278 "digest": "sha256", 00:15:31.278 "dhgroup": "ffdhe2048" 00:15:31.278 } 00:15:31.278 } 00:15:31.278 ]' 00:15:31.278 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.278 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.278 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.278 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:31.278 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.278 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.278 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.278 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.536 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:15:31.536 04:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:15:32.103 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.103 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.103 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:32.103 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.103 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.103 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.103 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:32.103 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.103 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:32.103 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:32.392 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:32.392 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.392 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:32.392 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:32.392 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:32.392 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.392 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.392 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.392 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.392 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.392 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.392 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.392 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.650 00:15:32.651 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.651 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.651 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.909 
04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.909 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.909 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.909 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.909 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.909 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.909 { 00:15:32.909 "cntlid": 17, 00:15:32.909 "qid": 0, 00:15:32.909 "state": "enabled", 00:15:32.909 "thread": "nvmf_tgt_poll_group_000", 00:15:32.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:32.909 "listen_address": { 00:15:32.909 "trtype": "TCP", 00:15:32.909 "adrfam": "IPv4", 00:15:32.909 "traddr": "10.0.0.2", 00:15:32.909 "trsvcid": "4420" 00:15:32.909 }, 00:15:32.909 "peer_address": { 00:15:32.909 "trtype": "TCP", 00:15:32.909 "adrfam": "IPv4", 00:15:32.909 "traddr": "10.0.0.1", 00:15:32.909 "trsvcid": "33244" 00:15:32.909 }, 00:15:32.909 "auth": { 00:15:32.909 "state": "completed", 00:15:32.909 "digest": "sha256", 00:15:32.909 "dhgroup": "ffdhe3072" 00:15:32.909 } 00:15:32.909 } 00:15:32.909 ]' 00:15:32.909 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.909 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.910 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.910 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:32.910 04:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.910 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.910 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.910 04:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.168 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:15:33.168 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:15:33.736 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.736 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:33.736 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.736 04:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.736 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.736 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.736 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:33.736 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:33.994 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:33.995 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.995 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.995 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:33.995 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:33.995 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.995 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.995 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.995 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.995 04:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.995 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.995 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.995 04:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.253 00:15:34.253 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.253 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.253 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.511 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.511 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.511 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.511 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.511 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.511 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.511 { 00:15:34.511 "cntlid": 19, 00:15:34.511 "qid": 0, 00:15:34.511 "state": "enabled", 00:15:34.511 "thread": "nvmf_tgt_poll_group_000", 00:15:34.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:34.511 "listen_address": { 00:15:34.511 "trtype": "TCP", 00:15:34.511 "adrfam": "IPv4", 00:15:34.511 "traddr": "10.0.0.2", 00:15:34.511 "trsvcid": "4420" 00:15:34.511 }, 00:15:34.511 "peer_address": { 00:15:34.511 "trtype": "TCP", 00:15:34.511 "adrfam": "IPv4", 00:15:34.511 "traddr": "10.0.0.1", 00:15:34.511 "trsvcid": "33256" 00:15:34.511 }, 00:15:34.511 "auth": { 00:15:34.511 "state": "completed", 00:15:34.511 "digest": "sha256", 00:15:34.511 "dhgroup": "ffdhe3072" 00:15:34.511 } 00:15:34.511 } 00:15:34.511 ]' 00:15:34.511 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.511 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.511 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.511 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:34.511 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.511 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.511 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.511 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:15:34.770 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:15:34.770 04:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:15:35.338 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.338 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:35.338 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.338 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.338 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.338 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.338 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:35.338 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:35.597 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:35.597 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.597 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.597 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:35.597 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:35.597 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.597 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.597 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.597 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.597 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.597 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.597 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.597 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.597 00:15:35.856 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.856 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.856 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.856 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.856 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.856 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.856 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.856 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.856 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.856 { 00:15:35.856 "cntlid": 21, 00:15:35.856 "qid": 0, 00:15:35.856 "state": "enabled", 00:15:35.856 "thread": "nvmf_tgt_poll_group_000", 00:15:35.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:35.856 "listen_address": { 00:15:35.856 "trtype": "TCP", 00:15:35.856 "adrfam": "IPv4", 00:15:35.856 "traddr": "10.0.0.2", 00:15:35.856 "trsvcid": "4420" 00:15:35.856 }, 00:15:35.856 "peer_address": { 00:15:35.856 "trtype": "TCP", 00:15:35.856 "adrfam": "IPv4", 
00:15:35.856 "traddr": "10.0.0.1", 00:15:35.856 "trsvcid": "33292" 00:15:35.856 }, 00:15:35.856 "auth": { 00:15:35.856 "state": "completed", 00:15:35.856 "digest": "sha256", 00:15:35.856 "dhgroup": "ffdhe3072" 00:15:35.856 } 00:15:35.856 } 00:15:35.856 ]' 00:15:35.856 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.115 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:36.115 04:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.115 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:36.115 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.115 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.115 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.115 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.373 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:15:36.373 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:15:36.940 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.940 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:36.940 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.940 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.940 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.940 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.940 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:36.941 04:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:36.941 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:36.941 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.941 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.941 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:36.941 04:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:36.941 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.941 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:36.941 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.941 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.199 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.199 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:37.199 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.199 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.199 00:15:37.457 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.457 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.457 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.457 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.457 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.457 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.457 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.457 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.457 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.457 { 00:15:37.457 "cntlid": 23, 00:15:37.457 "qid": 0, 00:15:37.457 "state": "enabled", 00:15:37.457 "thread": "nvmf_tgt_poll_group_000", 00:15:37.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:37.457 "listen_address": { 00:15:37.457 "trtype": "TCP", 00:15:37.457 "adrfam": "IPv4", 00:15:37.457 "traddr": "10.0.0.2", 00:15:37.457 "trsvcid": "4420" 00:15:37.457 }, 00:15:37.457 "peer_address": { 00:15:37.457 "trtype": "TCP", 00:15:37.457 "adrfam": "IPv4", 00:15:37.457 "traddr": "10.0.0.1", 00:15:37.457 "trsvcid": "47200" 00:15:37.457 }, 00:15:37.457 "auth": { 00:15:37.457 "state": "completed", 00:15:37.458 "digest": "sha256", 00:15:37.458 "dhgroup": "ffdhe3072" 00:15:37.458 } 00:15:37.458 } 00:15:37.458 ]' 00:15:37.458 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.716 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.716 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.716 04:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:37.716 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.716 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.716 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.716 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.977 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:15:37.977 04:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.544 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.803 00:15:38.803 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.803 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.061 04:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.061 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.061 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.061 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.061 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.061 04:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.061 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.061 { 00:15:39.061 "cntlid": 25, 00:15:39.061 "qid": 0, 00:15:39.061 "state": "enabled", 00:15:39.061 "thread": "nvmf_tgt_poll_group_000", 00:15:39.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:39.061 "listen_address": { 00:15:39.061 "trtype": "TCP", 00:15:39.061 "adrfam": "IPv4", 00:15:39.061 "traddr": "10.0.0.2", 00:15:39.061 "trsvcid": "4420" 00:15:39.061 }, 00:15:39.061 "peer_address": { 00:15:39.061 "trtype": "TCP", 00:15:39.061 "adrfam": "IPv4", 00:15:39.061 "traddr": "10.0.0.1", 00:15:39.061 "trsvcid": "47218" 00:15:39.061 }, 00:15:39.061 "auth": { 00:15:39.061 "state": "completed", 00:15:39.061 "digest": "sha256", 00:15:39.061 "dhgroup": "ffdhe4096" 00:15:39.061 } 00:15:39.061 } 00:15:39.061 ]' 00:15:39.061 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.061 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.061 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.320 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:39.320 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.320 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.320 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.320 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.578 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:15:39.578 04:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:40.146 04:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.146 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.405 00:15:40.664 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.664 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.664 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.664 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.664 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.664 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.664 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.664 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.664 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.664 { 00:15:40.664 "cntlid": 27, 00:15:40.664 "qid": 0, 00:15:40.664 "state": "enabled", 00:15:40.664 "thread": "nvmf_tgt_poll_group_000", 00:15:40.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:40.664 "listen_address": { 00:15:40.664 "trtype": "TCP", 00:15:40.664 "adrfam": "IPv4", 00:15:40.664 "traddr": "10.0.0.2", 00:15:40.664 
"trsvcid": "4420" 00:15:40.664 }, 00:15:40.664 "peer_address": { 00:15:40.664 "trtype": "TCP", 00:15:40.664 "adrfam": "IPv4", 00:15:40.664 "traddr": "10.0.0.1", 00:15:40.664 "trsvcid": "47244" 00:15:40.664 }, 00:15:40.664 "auth": { 00:15:40.664 "state": "completed", 00:15:40.664 "digest": "sha256", 00:15:40.664 "dhgroup": "ffdhe4096" 00:15:40.664 } 00:15:40.664 } 00:15:40.664 ]' 00:15:40.664 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.923 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.923 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.923 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:40.923 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.923 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.923 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.923 04:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.181 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:15:41.181 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.749 04:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.008 00:15:42.267 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.267 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:42.267 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.267 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.267 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.267 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.267 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.267 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.267 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.267 { 00:15:42.267 "cntlid": 29, 00:15:42.267 "qid": 0, 00:15:42.267 "state": "enabled", 00:15:42.267 "thread": "nvmf_tgt_poll_group_000", 00:15:42.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:42.267 "listen_address": { 00:15:42.267 "trtype": "TCP", 00:15:42.267 "adrfam": "IPv4", 00:15:42.267 "traddr": "10.0.0.2", 00:15:42.267 "trsvcid": "4420" 00:15:42.267 }, 00:15:42.267 "peer_address": { 00:15:42.267 "trtype": "TCP", 00:15:42.267 "adrfam": "IPv4", 00:15:42.267 "traddr": "10.0.0.1", 00:15:42.267 "trsvcid": "47270" 00:15:42.267 }, 00:15:42.267 "auth": { 00:15:42.267 "state": "completed", 00:15:42.267 "digest": "sha256", 00:15:42.267 "dhgroup": "ffdhe4096" 00:15:42.267 } 00:15:42.267 } 00:15:42.267 ]' 00:15:42.267 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.526 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.526 04:52:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.526 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:42.526 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.526 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.526 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.526 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.785 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:15:42.785 04:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:15:43.352 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.352 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:43.352 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.352 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.352 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.352 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.352 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:43.352 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:43.611 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:43.611 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.611 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.611 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:43.611 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:43.611 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.611 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:43.611 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.611 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.611 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.611 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:43.611 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.611 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.870 00:15:43.870 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.870 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.870 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.870 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.870 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.870 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.870 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:43.870 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.870 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.870 { 00:15:43.870 "cntlid": 31, 00:15:43.870 "qid": 0, 00:15:43.870 "state": "enabled", 00:15:43.870 "thread": "nvmf_tgt_poll_group_000", 00:15:43.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:43.870 "listen_address": { 00:15:43.870 "trtype": "TCP", 00:15:43.870 "adrfam": "IPv4", 00:15:43.870 "traddr": "10.0.0.2", 00:15:43.870 "trsvcid": "4420" 00:15:43.870 }, 00:15:43.870 "peer_address": { 00:15:43.870 "trtype": "TCP", 00:15:43.870 "adrfam": "IPv4", 00:15:43.870 "traddr": "10.0.0.1", 00:15:43.870 "trsvcid": "47310" 00:15:43.870 }, 00:15:43.870 "auth": { 00:15:43.870 "state": "completed", 00:15:43.870 "digest": "sha256", 00:15:43.870 "dhgroup": "ffdhe4096" 00:15:43.870 } 00:15:43.870 } 00:15:43.870 ]' 00:15:43.870 04:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.128 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.128 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.128 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:44.128 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.128 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.128 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.128 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.386 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:15:44.386 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:15:44.954 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.954 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:44.954 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.954 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.954 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.954 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:44.954 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.954 04:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:44.954 04:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:44.954 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:44.954 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.954 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:44.954 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:44.954 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:44.954 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.954 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.954 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.954 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.954 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.954 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.954 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.954 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.522 00:15:45.522 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.522 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.522 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.522 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.522 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.522 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.522 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.522 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.522 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.522 { 00:15:45.522 "cntlid": 33, 00:15:45.522 "qid": 0, 00:15:45.522 "state": "enabled", 00:15:45.522 "thread": "nvmf_tgt_poll_group_000", 00:15:45.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:45.522 "listen_address": { 00:15:45.522 "trtype": "TCP", 00:15:45.522 "adrfam": "IPv4", 00:15:45.522 "traddr": "10.0.0.2", 00:15:45.522 
"trsvcid": "4420" 00:15:45.522 }, 00:15:45.522 "peer_address": { 00:15:45.522 "trtype": "TCP", 00:15:45.522 "adrfam": "IPv4", 00:15:45.522 "traddr": "10.0.0.1", 00:15:45.522 "trsvcid": "47348" 00:15:45.522 }, 00:15:45.522 "auth": { 00:15:45.522 "state": "completed", 00:15:45.522 "digest": "sha256", 00:15:45.522 "dhgroup": "ffdhe6144" 00:15:45.522 } 00:15:45.522 } 00:15:45.522 ]' 00:15:45.522 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.780 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.780 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.780 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:45.780 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.780 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.780 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.780 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.039 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:15:46.039 04:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:15:46.606 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.606 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:46.606 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.606 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.606 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.606 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.606 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:46.606 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:46.606 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:46.606 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.606 04:52:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.606 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:46.606 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:46.606 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.607 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.607 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.607 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.607 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.607 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.607 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.607 04:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.174 00:15:47.174 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.174 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.174 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.174 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.174 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.174 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.174 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.174 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.174 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.174 { 00:15:47.174 "cntlid": 35, 00:15:47.174 "qid": 0, 00:15:47.174 "state": "enabled", 00:15:47.174 "thread": "nvmf_tgt_poll_group_000", 00:15:47.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:47.174 "listen_address": { 00:15:47.174 "trtype": "TCP", 00:15:47.174 "adrfam": "IPv4", 00:15:47.174 "traddr": "10.0.0.2", 00:15:47.174 "trsvcid": "4420" 00:15:47.174 }, 00:15:47.174 "peer_address": { 00:15:47.174 "trtype": "TCP", 00:15:47.174 "adrfam": "IPv4", 00:15:47.174 "traddr": "10.0.0.1", 00:15:47.175 "trsvcid": "50590" 00:15:47.175 }, 00:15:47.175 "auth": { 00:15:47.175 "state": "completed", 00:15:47.175 "digest": "sha256", 00:15:47.175 "dhgroup": "ffdhe6144" 00:15:47.175 } 00:15:47.175 } 00:15:47.175 ]' 00:15:47.175 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.433 04:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.433 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.433 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:47.433 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.433 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.433 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.433 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.692 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:15:47.692 04:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.260 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.829 00:15:48.829 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.829 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.829 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.829 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.829 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.829 04:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.829 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.829 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.829 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.829 { 00:15:48.829 "cntlid": 37, 00:15:48.829 "qid": 0, 00:15:48.829 "state": "enabled", 00:15:48.829 "thread": "nvmf_tgt_poll_group_000", 00:15:48.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:48.829 "listen_address": { 00:15:48.829 "trtype": "TCP", 00:15:48.829 "adrfam": "IPv4", 00:15:48.829 "traddr": "10.0.0.2", 00:15:48.829 "trsvcid": "4420" 00:15:48.829 }, 00:15:48.829 "peer_address": { 00:15:48.829 "trtype": "TCP", 00:15:48.829 "adrfam": "IPv4", 00:15:48.829 "traddr": "10.0.0.1", 00:15:48.829 "trsvcid": "50604" 00:15:48.829 }, 00:15:48.829 "auth": { 00:15:48.829 "state": "completed", 00:15:48.829 "digest": "sha256", 00:15:48.829 "dhgroup": "ffdhe6144" 00:15:48.829 } 00:15:48.829 } 00:15:48.829 ]' 00:15:48.829 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.088 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.088 04:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.088 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:49.088 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.088 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.088 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.088 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.346 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:15:49.346 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:15:49.913 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.913 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:49.913 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.913 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.913 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.913 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.913 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:49.913 04:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:50.173 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:50.173 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.173 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.173 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:50.173 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:50.173 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.173 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:50.173 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.173 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.173 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.173 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:50.173 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.173 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.432 00:15:50.432 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.432 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.432 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.692 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.692 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.692 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.692 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.692 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.692 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.692 { 00:15:50.692 "cntlid": 39, 00:15:50.692 "qid": 0, 00:15:50.692 "state": "enabled", 00:15:50.692 "thread": "nvmf_tgt_poll_group_000", 00:15:50.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:50.692 "listen_address": { 00:15:50.692 "trtype": "TCP", 00:15:50.692 "adrfam": 
"IPv4", 00:15:50.692 "traddr": "10.0.0.2", 00:15:50.692 "trsvcid": "4420" 00:15:50.692 }, 00:15:50.692 "peer_address": { 00:15:50.692 "trtype": "TCP", 00:15:50.692 "adrfam": "IPv4", 00:15:50.692 "traddr": "10.0.0.1", 00:15:50.692 "trsvcid": "50634" 00:15:50.692 }, 00:15:50.692 "auth": { 00:15:50.692 "state": "completed", 00:15:50.692 "digest": "sha256", 00:15:50.692 "dhgroup": "ffdhe6144" 00:15:50.692 } 00:15:50.692 } 00:15:50.692 ]' 00:15:50.692 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.692 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.692 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.692 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:50.692 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.692 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.692 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.692 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.951 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:15:50.951 04:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:15:51.518 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.518 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:51.518 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.518 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.518 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.518 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.518 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.518 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:51.518 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:51.777 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:51.777 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.777 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:51.777 
04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:51.777 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:51.777 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.778 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.778 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.778 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.778 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.778 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.778 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.778 04:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.344 00:15:52.344 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.344 04:52:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.344 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.344 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.344 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.344 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.344 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.344 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.344 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.345 { 00:15:52.345 "cntlid": 41, 00:15:52.345 "qid": 0, 00:15:52.345 "state": "enabled", 00:15:52.345 "thread": "nvmf_tgt_poll_group_000", 00:15:52.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:52.345 "listen_address": { 00:15:52.345 "trtype": "TCP", 00:15:52.345 "adrfam": "IPv4", 00:15:52.345 "traddr": "10.0.0.2", 00:15:52.345 "trsvcid": "4420" 00:15:52.345 }, 00:15:52.345 "peer_address": { 00:15:52.345 "trtype": "TCP", 00:15:52.345 "adrfam": "IPv4", 00:15:52.345 "traddr": "10.0.0.1", 00:15:52.345 "trsvcid": "50668" 00:15:52.345 }, 00:15:52.345 "auth": { 00:15:52.345 "state": "completed", 00:15:52.345 "digest": "sha256", 00:15:52.345 "dhgroup": "ffdhe8192" 00:15:52.345 } 00:15:52.345 } 00:15:52.345 ]' 00:15:52.345 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.345 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:15:52.345 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.603 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:52.603 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.603 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.603 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.603 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.862 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:15:52.862 04:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.430 04:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.997 00:15:53.998 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.998 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.998 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.256 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.256 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.256 04:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.256 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.256 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.256 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.256 { 00:15:54.256 "cntlid": 43, 00:15:54.256 "qid": 0, 00:15:54.256 "state": "enabled", 00:15:54.256 "thread": "nvmf_tgt_poll_group_000", 00:15:54.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:54.256 "listen_address": { 00:15:54.256 "trtype": "TCP", 00:15:54.256 "adrfam": "IPv4", 00:15:54.256 "traddr": "10.0.0.2", 00:15:54.256 "trsvcid": "4420" 00:15:54.256 }, 00:15:54.256 "peer_address": { 00:15:54.256 "trtype": "TCP", 00:15:54.256 "adrfam": "IPv4", 00:15:54.256 "traddr": "10.0.0.1", 00:15:54.256 "trsvcid": "50700" 00:15:54.256 }, 00:15:54.256 "auth": { 00:15:54.256 "state": "completed", 00:15:54.256 "digest": "sha256", 00:15:54.256 "dhgroup": "ffdhe8192" 00:15:54.256 } 00:15:54.256 } 00:15:54.256 ]' 00:15:54.256 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.256 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.256 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.256 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:54.256 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.256 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.256 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.256 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.515 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:15:54.515 04:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:15:55.082 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.082 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:55.082 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.082 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.082 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.082 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.082 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:55.082 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:55.341 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:55.341 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.341 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.341 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:55.342 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:55.342 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.342 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.342 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.342 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.342 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.342 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.342 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.342 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.908 00:15:55.908 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.908 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.908 04:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.908 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.908 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.908 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.908 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.168 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.168 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.168 { 00:15:56.168 "cntlid": 45, 00:15:56.168 "qid": 0, 00:15:56.168 "state": "enabled", 00:15:56.168 "thread": "nvmf_tgt_poll_group_000", 00:15:56.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:56.168 
"listen_address": { 00:15:56.168 "trtype": "TCP", 00:15:56.168 "adrfam": "IPv4", 00:15:56.168 "traddr": "10.0.0.2", 00:15:56.168 "trsvcid": "4420" 00:15:56.168 }, 00:15:56.168 "peer_address": { 00:15:56.168 "trtype": "TCP", 00:15:56.168 "adrfam": "IPv4", 00:15:56.168 "traddr": "10.0.0.1", 00:15:56.168 "trsvcid": "50732" 00:15:56.168 }, 00:15:56.168 "auth": { 00:15:56.168 "state": "completed", 00:15:56.168 "digest": "sha256", 00:15:56.168 "dhgroup": "ffdhe8192" 00:15:56.168 } 00:15:56.168 } 00:15:56.168 ]' 00:15:56.168 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.168 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.168 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.168 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:56.168 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.168 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.168 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.168 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.427 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:15:56.427 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:15:56.995 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.995 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:56.995 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.995 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.995 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.995 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.995 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:56.995 04:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:57.254 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:57.254 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.254 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:57.254 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:57.254 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:57.254 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.254 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:57.254 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.254 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.254 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.254 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:57.254 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:57.254 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:57.513 00:15:57.513 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.513 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:57.513 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.772 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.772 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.772 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.772 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.772 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.772 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.772 { 00:15:57.772 "cntlid": 47, 00:15:57.772 "qid": 0, 00:15:57.772 "state": "enabled", 00:15:57.772 "thread": "nvmf_tgt_poll_group_000", 00:15:57.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:57.772 "listen_address": { 00:15:57.772 "trtype": "TCP", 00:15:57.772 "adrfam": "IPv4", 00:15:57.772 "traddr": "10.0.0.2", 00:15:57.772 "trsvcid": "4420" 00:15:57.772 }, 00:15:57.772 "peer_address": { 00:15:57.772 "trtype": "TCP", 00:15:57.772 "adrfam": "IPv4", 00:15:57.772 "traddr": "10.0.0.1", 00:15:57.772 "trsvcid": "39314" 00:15:57.772 }, 00:15:57.772 "auth": { 00:15:57.772 "state": "completed", 00:15:57.772 "digest": "sha256", 00:15:57.772 "dhgroup": "ffdhe8192" 00:15:57.772 } 00:15:57.772 } 00:15:57.772 ]' 00:15:57.772 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.772 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.772 04:52:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.031 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:58.031 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.031 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.031 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.031 04:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.290 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:15:58.290 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:15:58.857 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.857 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:58.857 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:58.857 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.857 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.857 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:58.857 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.857 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.857 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:58.857 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:58.857 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:58.857 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.858 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.858 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:58.858 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:58.858 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.858 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.858 
04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.858 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.858 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.858 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.858 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.858 04:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.116 00:15:59.116 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.116 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.116 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.374 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.375 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.375 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.375 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.375 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.375 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.375 { 00:15:59.375 "cntlid": 49, 00:15:59.375 "qid": 0, 00:15:59.375 "state": "enabled", 00:15:59.375 "thread": "nvmf_tgt_poll_group_000", 00:15:59.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:59.375 "listen_address": { 00:15:59.375 "trtype": "TCP", 00:15:59.375 "adrfam": "IPv4", 00:15:59.375 "traddr": "10.0.0.2", 00:15:59.375 "trsvcid": "4420" 00:15:59.375 }, 00:15:59.375 "peer_address": { 00:15:59.375 "trtype": "TCP", 00:15:59.375 "adrfam": "IPv4", 00:15:59.375 "traddr": "10.0.0.1", 00:15:59.375 "trsvcid": "39336" 00:15:59.375 }, 00:15:59.375 "auth": { 00:15:59.375 "state": "completed", 00:15:59.375 "digest": "sha384", 00:15:59.375 "dhgroup": "null" 00:15:59.375 } 00:15:59.375 } 00:15:59.375 ]' 00:15:59.375 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.375 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.375 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.634 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:59.634 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.634 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.634 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:15:59.634 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.893 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:15:59.893 04:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.460 04:52:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.460 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.721 00:16:00.721 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.721 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.721 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.065 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.065 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.065 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.065 04:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.065 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.065 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.065 { 00:16:01.065 "cntlid": 51, 00:16:01.065 "qid": 0, 00:16:01.065 "state": "enabled", 00:16:01.065 "thread": "nvmf_tgt_poll_group_000", 00:16:01.065 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:01.065 "listen_address": { 00:16:01.065 "trtype": "TCP", 00:16:01.065 "adrfam": "IPv4", 00:16:01.065 "traddr": "10.0.0.2", 00:16:01.065 "trsvcid": "4420" 00:16:01.065 }, 00:16:01.065 "peer_address": { 00:16:01.065 "trtype": "TCP", 00:16:01.065 "adrfam": "IPv4", 00:16:01.065 "traddr": "10.0.0.1", 00:16:01.065 "trsvcid": "39344" 00:16:01.065 }, 00:16:01.065 "auth": { 00:16:01.065 "state": "completed", 00:16:01.065 "digest": "sha384", 00:16:01.065 "dhgroup": "null" 00:16:01.065 } 00:16:01.065 } 00:16:01.065 ]' 00:16:01.065 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.065 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.065 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.065 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:01.065 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.065 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.065 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.065 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.330 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:01.330 04:52:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:01.898 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.898 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:01.898 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.898 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.898 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.898 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.898 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:01.898 04:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:02.157 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:02.157 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:16:02.157 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:02.157 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:02.157 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:02.157 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.157 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.157 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.157 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.157 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.157 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.157 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.157 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.416 00:16:02.416 04:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.416 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.416 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.674 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.674 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.674 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.674 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.674 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.674 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.674 { 00:16:02.674 "cntlid": 53, 00:16:02.674 "qid": 0, 00:16:02.674 "state": "enabled", 00:16:02.674 "thread": "nvmf_tgt_poll_group_000", 00:16:02.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:02.674 "listen_address": { 00:16:02.674 "trtype": "TCP", 00:16:02.674 "adrfam": "IPv4", 00:16:02.674 "traddr": "10.0.0.2", 00:16:02.674 "trsvcid": "4420" 00:16:02.674 }, 00:16:02.674 "peer_address": { 00:16:02.674 "trtype": "TCP", 00:16:02.674 "adrfam": "IPv4", 00:16:02.674 "traddr": "10.0.0.1", 00:16:02.674 "trsvcid": "39360" 00:16:02.674 }, 00:16:02.674 "auth": { 00:16:02.674 "state": "completed", 00:16:02.674 "digest": "sha384", 00:16:02.674 "dhgroup": "null" 00:16:02.674 } 00:16:02.674 } 00:16:02.674 ]' 00:16:02.674 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:02.674 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.674 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.674 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:02.674 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.674 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.674 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.674 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.933 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:02.933 04:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:03.500 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.500 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:03.500 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.500 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.500 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.500 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.500 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:03.500 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:03.759 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:03.759 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.759 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.759 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:03.759 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:03.759 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.759 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:03.759 
04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.759 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.759 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.759 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:03.759 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.759 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.018 00:16:04.018 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.018 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.018 04:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.276 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.276 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.276 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.276 04:52:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.276 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.276 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.276 { 00:16:04.276 "cntlid": 55, 00:16:04.276 "qid": 0, 00:16:04.276 "state": "enabled", 00:16:04.276 "thread": "nvmf_tgt_poll_group_000", 00:16:04.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:04.276 "listen_address": { 00:16:04.276 "trtype": "TCP", 00:16:04.277 "adrfam": "IPv4", 00:16:04.277 "traddr": "10.0.0.2", 00:16:04.277 "trsvcid": "4420" 00:16:04.277 }, 00:16:04.277 "peer_address": { 00:16:04.277 "trtype": "TCP", 00:16:04.277 "adrfam": "IPv4", 00:16:04.277 "traddr": "10.0.0.1", 00:16:04.277 "trsvcid": "39392" 00:16:04.277 }, 00:16:04.277 "auth": { 00:16:04.277 "state": "completed", 00:16:04.277 "digest": "sha384", 00:16:04.277 "dhgroup": "null" 00:16:04.277 } 00:16:04.277 } 00:16:04.277 ]' 00:16:04.277 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.277 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.277 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.277 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:04.277 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.277 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.277 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.277 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.535 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:04.535 04:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:05.102 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.102 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:05.102 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.102 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.102 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.102 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.102 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.102 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:05.102 04:52:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:05.361 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:05.361 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.361 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:05.361 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:05.361 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:05.361 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.361 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.361 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.361 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.361 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.361 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.361 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.361 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.620 00:16:05.620 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.620 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.620 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.620 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.620 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.620 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.620 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.620 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.879 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.879 { 00:16:05.879 "cntlid": 57, 00:16:05.879 "qid": 0, 00:16:05.879 "state": "enabled", 00:16:05.879 "thread": "nvmf_tgt_poll_group_000", 00:16:05.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:05.879 "listen_address": { 00:16:05.879 "trtype": "TCP", 00:16:05.879 "adrfam": "IPv4", 00:16:05.879 "traddr": "10.0.0.2", 00:16:05.879 
"trsvcid": "4420" 00:16:05.879 }, 00:16:05.879 "peer_address": { 00:16:05.879 "trtype": "TCP", 00:16:05.879 "adrfam": "IPv4", 00:16:05.879 "traddr": "10.0.0.1", 00:16:05.879 "trsvcid": "39420" 00:16:05.879 }, 00:16:05.879 "auth": { 00:16:05.879 "state": "completed", 00:16:05.879 "digest": "sha384", 00:16:05.879 "dhgroup": "ffdhe2048" 00:16:05.879 } 00:16:05.879 } 00:16:05.879 ]' 00:16:05.879 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.879 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.879 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.879 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:05.879 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.879 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.879 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.879 04:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.138 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:06.138 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.705 04:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.705 04:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.964 00:16:06.964 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.964 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.964 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.223 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.223 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.223 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.223 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.223 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.223 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.223 { 00:16:07.223 "cntlid": 59, 00:16:07.223 "qid": 0, 00:16:07.223 "state": "enabled", 00:16:07.223 "thread": "nvmf_tgt_poll_group_000", 00:16:07.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:07.223 "listen_address": { 00:16:07.223 "trtype": "TCP", 00:16:07.223 "adrfam": "IPv4", 00:16:07.223 "traddr": "10.0.0.2", 00:16:07.223 "trsvcid": "4420" 00:16:07.223 }, 00:16:07.223 "peer_address": { 00:16:07.223 "trtype": "TCP", 00:16:07.223 "adrfam": "IPv4", 00:16:07.223 "traddr": "10.0.0.1", 00:16:07.223 "trsvcid": "57994" 00:16:07.223 }, 00:16:07.223 "auth": { 00:16:07.223 "state": "completed", 00:16:07.223 "digest": "sha384", 00:16:07.223 "dhgroup": "ffdhe2048" 00:16:07.223 } 00:16:07.223 } 00:16:07.223 ]' 00:16:07.223 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.223 04:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.223 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.482 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:07.482 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.482 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.482 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.482 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.741 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:07.741 04:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.309 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.568 00:16:08.568 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.568 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.568 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.827 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.827 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.827 04:52:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.827 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.827 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.827 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.827 { 00:16:08.827 "cntlid": 61, 00:16:08.827 "qid": 0, 00:16:08.827 "state": "enabled", 00:16:08.827 "thread": "nvmf_tgt_poll_group_000", 00:16:08.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:08.827 "listen_address": { 00:16:08.827 "trtype": "TCP", 00:16:08.827 "adrfam": "IPv4", 00:16:08.827 "traddr": "10.0.0.2", 00:16:08.827 "trsvcid": "4420" 00:16:08.827 }, 00:16:08.827 "peer_address": { 00:16:08.827 "trtype": "TCP", 00:16:08.827 "adrfam": "IPv4", 00:16:08.827 "traddr": "10.0.0.1", 00:16:08.827 "trsvcid": "58020" 00:16:08.827 }, 00:16:08.827 "auth": { 00:16:08.827 "state": "completed", 00:16:08.827 "digest": "sha384", 00:16:08.827 "dhgroup": "ffdhe2048" 00:16:08.827 } 00:16:08.827 } 00:16:08.827 ]' 00:16:08.827 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.827 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.827 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.827 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:08.827 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.086 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.086 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.086 04:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.086 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:09.086 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:09.652 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.652 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:09.652 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.652 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.652 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.652 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.652 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:09.652 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:09.910 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:09.910 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.910 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:09.910 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:09.910 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:09.910 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.910 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:09.910 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.910 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.910 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.910 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:09.910 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.910 04:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:10.168 00:16:10.168 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.168 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.168 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.426 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.426 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.427 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.427 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.427 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.427 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.427 { 00:16:10.427 "cntlid": 63, 00:16:10.427 "qid": 0, 00:16:10.427 "state": "enabled", 00:16:10.427 "thread": "nvmf_tgt_poll_group_000", 00:16:10.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:10.427 "listen_address": { 00:16:10.427 "trtype": "TCP", 00:16:10.427 "adrfam": 
"IPv4", 00:16:10.427 "traddr": "10.0.0.2", 00:16:10.427 "trsvcid": "4420" 00:16:10.427 }, 00:16:10.427 "peer_address": { 00:16:10.427 "trtype": "TCP", 00:16:10.427 "adrfam": "IPv4", 00:16:10.427 "traddr": "10.0.0.1", 00:16:10.427 "trsvcid": "58060" 00:16:10.427 }, 00:16:10.427 "auth": { 00:16:10.427 "state": "completed", 00:16:10.427 "digest": "sha384", 00:16:10.427 "dhgroup": "ffdhe2048" 00:16:10.427 } 00:16:10.427 } 00:16:10.427 ]' 00:16:10.427 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.427 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.427 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.427 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:10.427 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.427 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.427 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.427 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.685 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:10.685 04:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:11.252 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.252 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:11.252 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.252 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.252 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.252 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.252 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.252 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:11.252 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:11.511 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:11.511 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.511 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.511 
04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:11.511 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:11.511 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.511 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.511 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.511 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.511 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.511 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.511 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.511 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.770 00:16:11.770 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.770 04:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.770 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.029 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.029 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.029 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.029 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.029 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.029 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.029 { 00:16:12.029 "cntlid": 65, 00:16:12.029 "qid": 0, 00:16:12.029 "state": "enabled", 00:16:12.029 "thread": "nvmf_tgt_poll_group_000", 00:16:12.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:12.029 "listen_address": { 00:16:12.029 "trtype": "TCP", 00:16:12.029 "adrfam": "IPv4", 00:16:12.029 "traddr": "10.0.0.2", 00:16:12.029 "trsvcid": "4420" 00:16:12.029 }, 00:16:12.029 "peer_address": { 00:16:12.029 "trtype": "TCP", 00:16:12.029 "adrfam": "IPv4", 00:16:12.029 "traddr": "10.0.0.1", 00:16:12.029 "trsvcid": "58084" 00:16:12.029 }, 00:16:12.029 "auth": { 00:16:12.029 "state": "completed", 00:16:12.029 "digest": "sha384", 00:16:12.029 "dhgroup": "ffdhe3072" 00:16:12.029 } 00:16:12.029 } 00:16:12.029 ]' 00:16:12.029 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.029 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:16:12.029 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.029 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:12.029 04:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.029 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.029 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.029 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.287 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:12.287 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:12.854 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.854 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:12.854 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.854 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.855 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.855 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.855 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:12.855 04:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:13.113 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:13.113 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.113 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.113 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:13.113 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:13.113 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.113 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:13.113 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.113 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.113 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.113 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.113 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.113 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.372 00:16:13.372 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.372 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.372 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.372 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.372 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.372 04:53:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.372 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.630 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.630 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.630 { 00:16:13.630 "cntlid": 67, 00:16:13.630 "qid": 0, 00:16:13.630 "state": "enabled", 00:16:13.630 "thread": "nvmf_tgt_poll_group_000", 00:16:13.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:13.630 "listen_address": { 00:16:13.630 "trtype": "TCP", 00:16:13.630 "adrfam": "IPv4", 00:16:13.630 "traddr": "10.0.0.2", 00:16:13.630 "trsvcid": "4420" 00:16:13.630 }, 00:16:13.630 "peer_address": { 00:16:13.630 "trtype": "TCP", 00:16:13.630 "adrfam": "IPv4", 00:16:13.630 "traddr": "10.0.0.1", 00:16:13.631 "trsvcid": "58124" 00:16:13.631 }, 00:16:13.631 "auth": { 00:16:13.631 "state": "completed", 00:16:13.631 "digest": "sha384", 00:16:13.631 "dhgroup": "ffdhe3072" 00:16:13.631 } 00:16:13.631 } 00:16:13.631 ]' 00:16:13.631 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.631 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.631 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.631 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:13.631 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.631 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.631 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.631 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.889 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:13.889 04:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:14.457 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.457 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:14.457 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.457 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.457 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.457 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.457 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:14.457 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:14.715 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:14.715 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.715 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.715 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:14.715 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:14.715 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.715 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.715 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.715 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.715 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.715 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.715 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.715 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.974 00:16:14.974 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.974 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.974 04:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.233 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.233 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.233 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.233 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.233 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.233 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.233 { 00:16:15.233 "cntlid": 69, 00:16:15.233 "qid": 0, 00:16:15.233 "state": "enabled", 00:16:15.233 "thread": "nvmf_tgt_poll_group_000", 00:16:15.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:15.233 
"listen_address": { 00:16:15.233 "trtype": "TCP", 00:16:15.233 "adrfam": "IPv4", 00:16:15.233 "traddr": "10.0.0.2", 00:16:15.233 "trsvcid": "4420" 00:16:15.233 }, 00:16:15.233 "peer_address": { 00:16:15.233 "trtype": "TCP", 00:16:15.233 "adrfam": "IPv4", 00:16:15.233 "traddr": "10.0.0.1", 00:16:15.233 "trsvcid": "58154" 00:16:15.233 }, 00:16:15.233 "auth": { 00:16:15.233 "state": "completed", 00:16:15.233 "digest": "sha384", 00:16:15.233 "dhgroup": "ffdhe3072" 00:16:15.233 } 00:16:15.233 } 00:16:15.233 ]' 00:16:15.233 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.233 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.233 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.233 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:15.233 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.233 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.233 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.233 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.491 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:15.491 04:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:16.058 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.058 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:16.058 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.058 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.058 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.058 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.058 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:16.058 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:16.317 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:16.317 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.317 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:16.317 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:16.317 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:16.317 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.317 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:16.317 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.317 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.317 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.317 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:16.317 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.317 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.576 00:16:16.576 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.576 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:16.576 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.576 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.834 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.834 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.834 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.834 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.834 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.834 { 00:16:16.834 "cntlid": 71, 00:16:16.834 "qid": 0, 00:16:16.834 "state": "enabled", 00:16:16.834 "thread": "nvmf_tgt_poll_group_000", 00:16:16.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:16.834 "listen_address": { 00:16:16.834 "trtype": "TCP", 00:16:16.834 "adrfam": "IPv4", 00:16:16.834 "traddr": "10.0.0.2", 00:16:16.834 "trsvcid": "4420" 00:16:16.834 }, 00:16:16.834 "peer_address": { 00:16:16.834 "trtype": "TCP", 00:16:16.834 "adrfam": "IPv4", 00:16:16.834 "traddr": "10.0.0.1", 00:16:16.834 "trsvcid": "40880" 00:16:16.834 }, 00:16:16.834 "auth": { 00:16:16.834 "state": "completed", 00:16:16.834 "digest": "sha384", 00:16:16.834 "dhgroup": "ffdhe3072" 00:16:16.834 } 00:16:16.834 } 00:16:16.834 ]' 00:16:16.834 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.834 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.834 04:53:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.834 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:16.834 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.834 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.834 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.834 04:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.093 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:17.093 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:17.661 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.661 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:17.661 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:17.661 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.661 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.661 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.661 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.661 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:17.661 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:17.920 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:17.920 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.920 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:17.920 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:17.920 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:17.920 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.920 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.920 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:17.920 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.920 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.920 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.920 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.920 04:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.179 00:16:18.179 04:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.179 04:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.179 04:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.179 04:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.179 04:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.179 04:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.437 04:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.438 04:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.438 04:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.438 { 00:16:18.438 "cntlid": 73, 00:16:18.438 "qid": 0, 00:16:18.438 "state": "enabled", 00:16:18.438 "thread": "nvmf_tgt_poll_group_000", 00:16:18.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:18.438 "listen_address": { 00:16:18.438 "trtype": "TCP", 00:16:18.438 "adrfam": "IPv4", 00:16:18.438 "traddr": "10.0.0.2", 00:16:18.438 "trsvcid": "4420" 00:16:18.438 }, 00:16:18.438 "peer_address": { 00:16:18.438 "trtype": "TCP", 00:16:18.438 "adrfam": "IPv4", 00:16:18.438 "traddr": "10.0.0.1", 00:16:18.438 "trsvcid": "40890" 00:16:18.438 }, 00:16:18.438 "auth": { 00:16:18.438 "state": "completed", 00:16:18.438 "digest": "sha384", 00:16:18.438 "dhgroup": "ffdhe4096" 00:16:18.438 } 00:16:18.438 } 00:16:18.438 ]' 00:16:18.438 04:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.438 04:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.438 04:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.438 04:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:18.438 04:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.438 04:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.438 04:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.438 04:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.696 04:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:18.696 04:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:19.264 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.264 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:19.264 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.264 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.264 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.264 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.264 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:19.264 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:19.522 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:19.522 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.522 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.522 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:19.522 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:19.522 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.522 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.522 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.522 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.522 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.522 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.522 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.522 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.780 00:16:19.780 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.780 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.780 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.038 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.038 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.038 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.038 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.038 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.038 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.038 { 00:16:20.038 "cntlid": 75, 00:16:20.038 "qid": 0, 00:16:20.038 "state": "enabled", 00:16:20.038 "thread": "nvmf_tgt_poll_group_000", 00:16:20.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:20.038 
"listen_address": { 00:16:20.038 "trtype": "TCP", 00:16:20.038 "adrfam": "IPv4", 00:16:20.038 "traddr": "10.0.0.2", 00:16:20.038 "trsvcid": "4420" 00:16:20.038 }, 00:16:20.038 "peer_address": { 00:16:20.038 "trtype": "TCP", 00:16:20.038 "adrfam": "IPv4", 00:16:20.038 "traddr": "10.0.0.1", 00:16:20.038 "trsvcid": "40914" 00:16:20.038 }, 00:16:20.038 "auth": { 00:16:20.038 "state": "completed", 00:16:20.038 "digest": "sha384", 00:16:20.038 "dhgroup": "ffdhe4096" 00:16:20.038 } 00:16:20.038 } 00:16:20.038 ]' 00:16:20.038 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.038 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.038 04:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.038 04:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:20.039 04:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.039 04:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.039 04:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.039 04:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.296 04:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:20.296 04:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:20.861 04:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.861 04:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:20.861 04:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.861 04:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.861 04:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.861 04:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.861 04:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:20.861 04:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:21.120 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:21.120 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.120 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:21.120 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:21.120 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:21.120 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.120 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.120 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.120 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.120 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.120 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.120 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.120 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.378 00:16:21.378 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:21.378 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.378 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.637 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.637 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.637 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.637 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.637 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.637 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.637 { 00:16:21.637 "cntlid": 77, 00:16:21.637 "qid": 0, 00:16:21.637 "state": "enabled", 00:16:21.637 "thread": "nvmf_tgt_poll_group_000", 00:16:21.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:21.637 "listen_address": { 00:16:21.637 "trtype": "TCP", 00:16:21.637 "adrfam": "IPv4", 00:16:21.637 "traddr": "10.0.0.2", 00:16:21.637 "trsvcid": "4420" 00:16:21.637 }, 00:16:21.637 "peer_address": { 00:16:21.637 "trtype": "TCP", 00:16:21.637 "adrfam": "IPv4", 00:16:21.637 "traddr": "10.0.0.1", 00:16:21.637 "trsvcid": "40958" 00:16:21.637 }, 00:16:21.637 "auth": { 00:16:21.637 "state": "completed", 00:16:21.637 "digest": "sha384", 00:16:21.637 "dhgroup": "ffdhe4096" 00:16:21.637 } 00:16:21.637 } 00:16:21.637 ]' 00:16:21.637 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.637 04:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.637 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.637 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:21.637 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.637 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.637 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.637 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.897 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:21.897 04:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:22.464 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.464 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:22.464 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.464 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.464 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.464 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.464 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:22.464 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:22.722 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:22.722 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.722 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:22.722 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:22.722 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:22.722 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.722 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:22.722 04:53:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.722 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.722 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.722 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:22.722 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.722 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.980 00:16:22.980 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.980 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.980 04:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.239 04:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.239 04:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.239 04:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.239 04:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.239 04:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.239 04:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.239 { 00:16:23.239 "cntlid": 79, 00:16:23.239 "qid": 0, 00:16:23.239 "state": "enabled", 00:16:23.239 "thread": "nvmf_tgt_poll_group_000", 00:16:23.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:23.239 "listen_address": { 00:16:23.239 "trtype": "TCP", 00:16:23.239 "adrfam": "IPv4", 00:16:23.239 "traddr": "10.0.0.2", 00:16:23.239 "trsvcid": "4420" 00:16:23.239 }, 00:16:23.239 "peer_address": { 00:16:23.239 "trtype": "TCP", 00:16:23.239 "adrfam": "IPv4", 00:16:23.239 "traddr": "10.0.0.1", 00:16:23.239 "trsvcid": "40978" 00:16:23.239 }, 00:16:23.239 "auth": { 00:16:23.239 "state": "completed", 00:16:23.239 "digest": "sha384", 00:16:23.239 "dhgroup": "ffdhe4096" 00:16:23.239 } 00:16:23.239 } 00:16:23.239 ]' 00:16:23.239 04:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.239 04:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.239 04:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.239 04:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:23.239 04:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.239 04:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.239 04:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.239 04:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.497 04:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:23.497 04:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:24.063 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.063 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:24.063 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.063 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.064 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.064 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.064 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.064 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:16:24.064 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:24.321 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:24.321 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.321 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:24.321 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:24.321 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:24.321 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.321 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.321 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.321 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.321 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.321 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.321 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.321 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.578 00:16:24.578 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.578 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.578 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.836 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.836 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.837 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.837 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.837 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.837 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.837 { 00:16:24.837 "cntlid": 81, 00:16:24.837 "qid": 0, 00:16:24.837 "state": "enabled", 00:16:24.837 "thread": "nvmf_tgt_poll_group_000", 00:16:24.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:24.837 "listen_address": { 
00:16:24.837 "trtype": "TCP", 00:16:24.837 "adrfam": "IPv4", 00:16:24.837 "traddr": "10.0.0.2", 00:16:24.837 "trsvcid": "4420" 00:16:24.837 }, 00:16:24.837 "peer_address": { 00:16:24.837 "trtype": "TCP", 00:16:24.837 "adrfam": "IPv4", 00:16:24.837 "traddr": "10.0.0.1", 00:16:24.837 "trsvcid": "41010" 00:16:24.837 }, 00:16:24.837 "auth": { 00:16:24.837 "state": "completed", 00:16:24.837 "digest": "sha384", 00:16:24.837 "dhgroup": "ffdhe6144" 00:16:24.837 } 00:16:24.837 } 00:16:24.837 ]' 00:16:24.837 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.837 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.837 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.837 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:24.837 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.837 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.837 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.837 04:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.095 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:25.095 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:25.660 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.660 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:25.660 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.660 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.660 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.660 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.660 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:25.660 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:25.918 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:25.918 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:25.918 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.918 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:25.918 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.918 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.919 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.919 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.919 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.919 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.919 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.919 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.919 04:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.177 00:16:26.177 04:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.177 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.177 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.435 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.435 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.435 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.435 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.435 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.435 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.435 { 00:16:26.435 "cntlid": 83, 00:16:26.435 "qid": 0, 00:16:26.435 "state": "enabled", 00:16:26.435 "thread": "nvmf_tgt_poll_group_000", 00:16:26.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:26.435 "listen_address": { 00:16:26.435 "trtype": "TCP", 00:16:26.435 "adrfam": "IPv4", 00:16:26.435 "traddr": "10.0.0.2", 00:16:26.435 "trsvcid": "4420" 00:16:26.435 }, 00:16:26.435 "peer_address": { 00:16:26.435 "trtype": "TCP", 00:16:26.435 "adrfam": "IPv4", 00:16:26.435 "traddr": "10.0.0.1", 00:16:26.435 "trsvcid": "34254" 00:16:26.435 }, 00:16:26.435 "auth": { 00:16:26.435 "state": "completed", 00:16:26.435 "digest": "sha384", 00:16:26.435 "dhgroup": "ffdhe6144" 00:16:26.435 } 00:16:26.435 } 00:16:26.435 ]' 00:16:26.435 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:16:26.694 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.694 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.694 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:26.694 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.694 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.694 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.694 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.952 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:26.952 04:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.518 04:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.518 04:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.084 00:16:28.084 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.084 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.084 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.084 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.084 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.084 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.084 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.341 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.341 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.341 { 00:16:28.341 "cntlid": 85, 00:16:28.341 "qid": 0, 00:16:28.341 "state": "enabled", 00:16:28.341 "thread": "nvmf_tgt_poll_group_000", 00:16:28.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:28.341 "listen_address": { 00:16:28.341 "trtype": "TCP", 00:16:28.341 "adrfam": "IPv4", 00:16:28.341 "traddr": "10.0.0.2", 00:16:28.341 "trsvcid": "4420" 00:16:28.341 }, 00:16:28.341 "peer_address": { 00:16:28.341 "trtype": "TCP", 00:16:28.341 "adrfam": "IPv4", 00:16:28.342 "traddr": "10.0.0.1", 00:16:28.342 "trsvcid": "34282" 00:16:28.342 }, 00:16:28.342 "auth": { 00:16:28.342 "state": "completed", 00:16:28.342 "digest": "sha384", 00:16:28.342 "dhgroup": "ffdhe6144" 00:16:28.342 } 00:16:28.342 } 00:16:28.342 ]' 00:16:28.342 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.342 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.342 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.342 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:28.342 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.342 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:28.342 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.342 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.599 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:28.599 04:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:29.165 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.165 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:29.165 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.165 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.165 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.165 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:29.165 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:29.165 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:29.422 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:29.422 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.422 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:29.422 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:29.422 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:29.422 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.422 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:29.422 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.422 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.423 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.423 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:29.423 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.423 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.681 00:16:29.681 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.681 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.681 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.939 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.939 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.939 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.939 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.939 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.939 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.939 { 00:16:29.939 "cntlid": 87, 00:16:29.939 "qid": 0, 00:16:29.939 "state": "enabled", 00:16:29.939 "thread": "nvmf_tgt_poll_group_000", 00:16:29.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:29.939 "listen_address": { 00:16:29.939 "trtype": 
"TCP", 00:16:29.939 "adrfam": "IPv4", 00:16:29.939 "traddr": "10.0.0.2", 00:16:29.939 "trsvcid": "4420" 00:16:29.939 }, 00:16:29.939 "peer_address": { 00:16:29.939 "trtype": "TCP", 00:16:29.939 "adrfam": "IPv4", 00:16:29.939 "traddr": "10.0.0.1", 00:16:29.939 "trsvcid": "34316" 00:16:29.939 }, 00:16:29.939 "auth": { 00:16:29.939 "state": "completed", 00:16:29.939 "digest": "sha384", 00:16:29.939 "dhgroup": "ffdhe6144" 00:16:29.939 } 00:16:29.939 } 00:16:29.939 ]' 00:16:29.939 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.939 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.939 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.939 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:29.939 04:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.939 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.939 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.940 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.198 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:30.198 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:30.764 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.764 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:30.764 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.764 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.764 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.764 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.764 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.764 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:30.764 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:31.023 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:31.023 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.023 04:53:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:31.023 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:31.023 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:31.023 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.023 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.023 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.023 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.023 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.023 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.023 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.023 04:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.590 00:16:31.590 04:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.590 04:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.590 04:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.590 04:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.590 04:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.590 04:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.590 04:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.590 04:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.590 04:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.590 { 00:16:31.590 "cntlid": 89, 00:16:31.590 "qid": 0, 00:16:31.590 "state": "enabled", 00:16:31.590 "thread": "nvmf_tgt_poll_group_000", 00:16:31.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:31.590 "listen_address": { 00:16:31.590 "trtype": "TCP", 00:16:31.590 "adrfam": "IPv4", 00:16:31.590 "traddr": "10.0.0.2", 00:16:31.590 "trsvcid": "4420" 00:16:31.590 }, 00:16:31.590 "peer_address": { 00:16:31.590 "trtype": "TCP", 00:16:31.590 "adrfam": "IPv4", 00:16:31.590 "traddr": "10.0.0.1", 00:16:31.590 "trsvcid": "34346" 00:16:31.590 }, 00:16:31.590 "auth": { 00:16:31.590 "state": "completed", 00:16:31.590 "digest": "sha384", 00:16:31.590 "dhgroup": "ffdhe8192" 00:16:31.590 } 00:16:31.590 } 00:16:31.590 ]' 00:16:31.590 04:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.848 04:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.848 04:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.848 04:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:31.848 04:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.848 04:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.848 04:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.848 04:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.107 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:32.107 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:32.673 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:32.673 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:32.673 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.673 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.673 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.673 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.673 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:32.673 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:32.673 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:32.673 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.673 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:32.673 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:32.673 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:32.673 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.673 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.673 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.673 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.931 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.931 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.931 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.931 04:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.189 00:16:33.189 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.189 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.189 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.448 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.448 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.448 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.448 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.448 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.448 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.448 { 00:16:33.448 "cntlid": 91, 00:16:33.448 "qid": 0, 00:16:33.448 "state": "enabled", 00:16:33.448 "thread": "nvmf_tgt_poll_group_000", 00:16:33.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:33.448 "listen_address": { 00:16:33.448 "trtype": "TCP", 00:16:33.448 "adrfam": "IPv4", 00:16:33.448 "traddr": "10.0.0.2", 00:16:33.448 "trsvcid": "4420" 00:16:33.448 }, 00:16:33.448 "peer_address": { 00:16:33.448 "trtype": "TCP", 00:16:33.448 "adrfam": "IPv4", 00:16:33.448 "traddr": "10.0.0.1", 00:16:33.448 "trsvcid": "34370" 00:16:33.448 }, 00:16:33.448 "auth": { 00:16:33.448 "state": "completed", 00:16:33.448 "digest": "sha384", 00:16:33.448 "dhgroup": "ffdhe8192" 00:16:33.448 } 00:16:33.448 } 00:16:33.448 ]' 00:16:33.448 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.448 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.448 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.448 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:33.448 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.705 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:33.705 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.705 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.706 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:33.706 04:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:34.270 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.271 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:34.271 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.271 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.271 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.271 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:34.271 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:34.271 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:34.528 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:34.528 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.528 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:34.528 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:34.528 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:34.528 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.528 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.528 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.528 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.528 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.528 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.528 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.528 04:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.093 00:16:35.093 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.093 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.093 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.352 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.352 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.352 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.352 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.352 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.352 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.352 { 00:16:35.352 "cntlid": 93, 00:16:35.352 "qid": 0, 00:16:35.352 "state": "enabled", 00:16:35.352 "thread": "nvmf_tgt_poll_group_000", 00:16:35.352 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:35.352 "listen_address": { 00:16:35.352 "trtype": "TCP", 00:16:35.352 "adrfam": "IPv4", 00:16:35.352 "traddr": "10.0.0.2", 00:16:35.352 "trsvcid": "4420" 00:16:35.352 }, 00:16:35.352 "peer_address": { 00:16:35.352 "trtype": "TCP", 00:16:35.352 "adrfam": "IPv4", 00:16:35.352 "traddr": "10.0.0.1", 00:16:35.352 "trsvcid": "34402" 00:16:35.352 }, 00:16:35.352 "auth": { 00:16:35.352 "state": "completed", 00:16:35.352 "digest": "sha384", 00:16:35.352 "dhgroup": "ffdhe8192" 00:16:35.352 } 00:16:35.352 } 00:16:35.352 ]' 00:16:35.352 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.352 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:35.352 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.352 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:35.352 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.352 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.352 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.352 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.610 04:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:35.610 04:53:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:36.176 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.176 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:36.176 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.176 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.176 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.176 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.176 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:36.176 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:36.435 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:36.435 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:36.435 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:36.435 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:36.435 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:36.435 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.435 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:36.435 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.435 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.435 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.435 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:36.435 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.435 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.001 00:16:37.001 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:16:37.001 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.001 04:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.001 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.001 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.001 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.001 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.001 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.001 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.001 { 00:16:37.001 "cntlid": 95, 00:16:37.001 "qid": 0, 00:16:37.001 "state": "enabled", 00:16:37.001 "thread": "nvmf_tgt_poll_group_000", 00:16:37.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:37.001 "listen_address": { 00:16:37.001 "trtype": "TCP", 00:16:37.001 "adrfam": "IPv4", 00:16:37.001 "traddr": "10.0.0.2", 00:16:37.001 "trsvcid": "4420" 00:16:37.001 }, 00:16:37.001 "peer_address": { 00:16:37.001 "trtype": "TCP", 00:16:37.001 "adrfam": "IPv4", 00:16:37.001 "traddr": "10.0.0.1", 00:16:37.001 "trsvcid": "55742" 00:16:37.001 }, 00:16:37.001 "auth": { 00:16:37.001 "state": "completed", 00:16:37.001 "digest": "sha384", 00:16:37.001 "dhgroup": "ffdhe8192" 00:16:37.001 } 00:16:37.001 } 00:16:37.001 ]' 00:16:37.001 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.001 04:53:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.001 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.260 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:37.260 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.260 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.260 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.260 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.519 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:37.519 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:38.085 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.085 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:38.085 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.085 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.085 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.085 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:38.085 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.085 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.085 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:38.085 04:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:38.085 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:38.085 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.085 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.085 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:38.085 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.085 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.085 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.085 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.085 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.085 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.085 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.085 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.085 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.358 00:16:38.358 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.358 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.358 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.703 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.703 04:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.703 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.703 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.703 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.703 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.703 { 00:16:38.703 "cntlid": 97, 00:16:38.703 "qid": 0, 00:16:38.703 "state": "enabled", 00:16:38.703 "thread": "nvmf_tgt_poll_group_000", 00:16:38.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:38.703 "listen_address": { 00:16:38.703 "trtype": "TCP", 00:16:38.703 "adrfam": "IPv4", 00:16:38.703 "traddr": "10.0.0.2", 00:16:38.703 "trsvcid": "4420" 00:16:38.703 }, 00:16:38.703 "peer_address": { 00:16:38.703 "trtype": "TCP", 00:16:38.703 "adrfam": "IPv4", 00:16:38.703 "traddr": "10.0.0.1", 00:16:38.703 "trsvcid": "55772" 00:16:38.703 }, 00:16:38.703 "auth": { 00:16:38.703 "state": "completed", 00:16:38.703 "digest": "sha512", 00:16:38.703 "dhgroup": "null" 00:16:38.703 } 00:16:38.703 } 00:16:38.703 ]' 00:16:38.703 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.703 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.703 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.703 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:38.703 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.703 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.703 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.703 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.984 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:38.984 04:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:39.550 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.550 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:39.550 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.550 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.550 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.550 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.550 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:39.550 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:39.809 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:39.809 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.809 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.809 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:39.809 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:39.809 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.809 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.809 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.809 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.809 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.809 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.809 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.809 04:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.067 00:16:40.067 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.067 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.067 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.326 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.326 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.326 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.326 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.326 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.326 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.326 { 00:16:40.326 "cntlid": 99, 
00:16:40.326 "qid": 0, 00:16:40.326 "state": "enabled", 00:16:40.326 "thread": "nvmf_tgt_poll_group_000", 00:16:40.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:40.326 "listen_address": { 00:16:40.326 "trtype": "TCP", 00:16:40.326 "adrfam": "IPv4", 00:16:40.326 "traddr": "10.0.0.2", 00:16:40.326 "trsvcid": "4420" 00:16:40.326 }, 00:16:40.326 "peer_address": { 00:16:40.326 "trtype": "TCP", 00:16:40.326 "adrfam": "IPv4", 00:16:40.326 "traddr": "10.0.0.1", 00:16:40.326 "trsvcid": "55802" 00:16:40.326 }, 00:16:40.326 "auth": { 00:16:40.326 "state": "completed", 00:16:40.326 "digest": "sha512", 00:16:40.326 "dhgroup": "null" 00:16:40.326 } 00:16:40.326 } 00:16:40.326 ]' 00:16:40.326 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.326 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.326 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.326 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:40.326 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.326 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.326 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.326 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.584 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret 
DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:40.584 04:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:41.150 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.150 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:41.150 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.150 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.150 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.150 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.150 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:41.150 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:41.408 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:16:41.408 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.408 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.408 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:41.408 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:41.408 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.408 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.408 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.408 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.408 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.408 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.408 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.408 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.666 00:16:41.666 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.666 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.666 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.666 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.666 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.666 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.666 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.666 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.666 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.666 { 00:16:41.666 "cntlid": 101, 00:16:41.666 "qid": 0, 00:16:41.666 "state": "enabled", 00:16:41.666 "thread": "nvmf_tgt_poll_group_000", 00:16:41.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:41.666 "listen_address": { 00:16:41.666 "trtype": "TCP", 00:16:41.666 "adrfam": "IPv4", 00:16:41.666 "traddr": "10.0.0.2", 00:16:41.666 "trsvcid": "4420" 00:16:41.666 }, 00:16:41.666 "peer_address": { 00:16:41.666 "trtype": "TCP", 00:16:41.666 "adrfam": "IPv4", 00:16:41.666 "traddr": "10.0.0.1", 00:16:41.666 "trsvcid": "55824" 00:16:41.666 }, 00:16:41.666 "auth": { 00:16:41.666 "state": "completed", 00:16:41.666 "digest": "sha512", 00:16:41.666 "dhgroup": "null" 00:16:41.666 } 00:16:41.666 } 
00:16:41.666 ]' 00:16:41.666 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.924 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.924 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.924 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:41.925 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.925 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.925 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.925 04:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.183 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:42.183 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:42.749 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.749 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.749 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:42.749 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.749 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.749 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.749 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.749 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:42.749 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:43.007 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:43.007 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.007 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.007 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:43.007 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:43.007 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.007 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:43.007 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.007 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.007 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.007 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:43.007 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.007 04:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.266 00:16:43.266 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.266 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.266 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.266 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.266 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:43.266 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.266 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.266 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.266 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.266 { 00:16:43.266 "cntlid": 103, 00:16:43.266 "qid": 0, 00:16:43.266 "state": "enabled", 00:16:43.266 "thread": "nvmf_tgt_poll_group_000", 00:16:43.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:43.266 "listen_address": { 00:16:43.266 "trtype": "TCP", 00:16:43.266 "adrfam": "IPv4", 00:16:43.266 "traddr": "10.0.0.2", 00:16:43.266 "trsvcid": "4420" 00:16:43.266 }, 00:16:43.266 "peer_address": { 00:16:43.266 "trtype": "TCP", 00:16:43.266 "adrfam": "IPv4", 00:16:43.266 "traddr": "10.0.0.1", 00:16:43.266 "trsvcid": "55852" 00:16:43.266 }, 00:16:43.266 "auth": { 00:16:43.266 "state": "completed", 00:16:43.266 "digest": "sha512", 00:16:43.266 "dhgroup": "null" 00:16:43.266 } 00:16:43.266 } 00:16:43.266 ]' 00:16:43.266 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.524 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.524 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.524 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:43.524 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.524 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.524 04:53:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.524 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.782 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:43.782 04:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:44.347 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.347 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:44.347 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.347 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.347 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.347 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.347 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.347 04:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:44.347 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:44.605 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:44.605 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.605 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.605 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:44.605 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:44.605 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.605 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.605 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.605 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.605 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.605 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.605 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.605 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.863 00:16:44.863 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.863 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.863 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.863 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.863 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.863 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.863 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.863 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.863 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.863 { 00:16:44.863 "cntlid": 105, 00:16:44.863 "qid": 0, 00:16:44.863 "state": "enabled", 00:16:44.863 "thread": "nvmf_tgt_poll_group_000", 00:16:44.863 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:44.863 "listen_address": { 00:16:44.863 "trtype": "TCP", 00:16:44.863 "adrfam": "IPv4", 00:16:44.863 "traddr": "10.0.0.2", 00:16:44.863 "trsvcid": "4420" 00:16:44.863 }, 00:16:44.863 "peer_address": { 00:16:44.863 "trtype": "TCP", 00:16:44.863 "adrfam": "IPv4", 00:16:44.863 "traddr": "10.0.0.1", 00:16:44.863 "trsvcid": "55890" 00:16:44.863 }, 00:16:44.863 "auth": { 00:16:44.863 "state": "completed", 00:16:44.863 "digest": "sha512", 00:16:44.863 "dhgroup": "ffdhe2048" 00:16:44.863 } 00:16:44.863 } 00:16:44.863 ]' 00:16:44.863 04:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.121 04:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.121 04:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.121 04:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.121 04:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.121 04:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.121 04:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.121 04:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.379 04:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret 
DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:45.379 04:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:45.944 04:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.944 04:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:45.944 04:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.944 04:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.944 04:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.944 04:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.944 04:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:45.944 04:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:45.944 04:53:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:45.945 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.945 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.945 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:45.945 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.945 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.945 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.945 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.945 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.203 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.203 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.203 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.203 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.203 00:16:46.461 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.461 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.461 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.461 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.461 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.461 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.461 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.461 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.462 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.462 { 00:16:46.462 "cntlid": 107, 00:16:46.462 "qid": 0, 00:16:46.462 "state": "enabled", 00:16:46.462 "thread": "nvmf_tgt_poll_group_000", 00:16:46.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:46.462 "listen_address": { 00:16:46.462 "trtype": "TCP", 00:16:46.462 "adrfam": "IPv4", 00:16:46.462 "traddr": "10.0.0.2", 00:16:46.462 "trsvcid": "4420" 00:16:46.462 }, 00:16:46.462 "peer_address": { 00:16:46.462 "trtype": "TCP", 00:16:46.462 "adrfam": "IPv4", 00:16:46.462 "traddr": "10.0.0.1", 00:16:46.462 "trsvcid": "60098" 00:16:46.462 }, 00:16:46.462 "auth": { 00:16:46.462 "state": 
"completed", 00:16:46.462 "digest": "sha512", 00:16:46.462 "dhgroup": "ffdhe2048" 00:16:46.462 } 00:16:46.462 } 00:16:46.462 ]' 00:16:46.462 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.462 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.720 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.720 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:46.720 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.720 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.720 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.720 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.979 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:46.979 04:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:47.546 04:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.546 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.804 00:16:47.804 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.804 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.804 04:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.063 
04:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.063 04:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.063 04:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.063 04:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.063 04:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.063 04:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.063 { 00:16:48.063 "cntlid": 109, 00:16:48.063 "qid": 0, 00:16:48.063 "state": "enabled", 00:16:48.063 "thread": "nvmf_tgt_poll_group_000", 00:16:48.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:48.063 "listen_address": { 00:16:48.063 "trtype": "TCP", 00:16:48.063 "adrfam": "IPv4", 00:16:48.063 "traddr": "10.0.0.2", 00:16:48.063 "trsvcid": "4420" 00:16:48.063 }, 00:16:48.063 "peer_address": { 00:16:48.063 "trtype": "TCP", 00:16:48.063 "adrfam": "IPv4", 00:16:48.063 "traddr": "10.0.0.1", 00:16:48.063 "trsvcid": "60108" 00:16:48.063 }, 00:16:48.063 "auth": { 00:16:48.063 "state": "completed", 00:16:48.063 "digest": "sha512", 00:16:48.063 "dhgroup": "ffdhe2048" 00:16:48.063 } 00:16:48.063 } 00:16:48.063 ]' 00:16:48.063 04:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.063 04:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.063 04:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.321 04:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:48.321 04:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.321 04:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.321 04:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.321 04:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.579 04:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:48.579 04:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:49.146 04:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.146 
04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.146 04:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.146 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.404 00:16:49.404 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.404 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.404 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.662 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.662 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.662 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.662 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.662 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.662 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.662 { 00:16:49.662 "cntlid": 111, 
00:16:49.662 "qid": 0, 00:16:49.662 "state": "enabled", 00:16:49.662 "thread": "nvmf_tgt_poll_group_000", 00:16:49.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:49.662 "listen_address": { 00:16:49.662 "trtype": "TCP", 00:16:49.662 "adrfam": "IPv4", 00:16:49.662 "traddr": "10.0.0.2", 00:16:49.662 "trsvcid": "4420" 00:16:49.662 }, 00:16:49.662 "peer_address": { 00:16:49.662 "trtype": "TCP", 00:16:49.662 "adrfam": "IPv4", 00:16:49.662 "traddr": "10.0.0.1", 00:16:49.662 "trsvcid": "60138" 00:16:49.662 }, 00:16:49.662 "auth": { 00:16:49.662 "state": "completed", 00:16:49.662 "digest": "sha512", 00:16:49.662 "dhgroup": "ffdhe2048" 00:16:49.662 } 00:16:49.662 } 00:16:49.662 ]' 00:16:49.662 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.662 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.662 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.662 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:49.662 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.920 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.920 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.920 04:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.920 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:49.920 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:50.487 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.487 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:50.487 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.487 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.487 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.487 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.487 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.487 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:50.487 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:50.745 04:53:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:50.745 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.745 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.745 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:50.745 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:50.745 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.745 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.745 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.745 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.745 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.745 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.745 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.745 04:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.004 00:16:51.004 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.004 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.004 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.262 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.262 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.262 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.262 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.262 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.262 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.262 { 00:16:51.262 "cntlid": 113, 00:16:51.262 "qid": 0, 00:16:51.262 "state": "enabled", 00:16:51.262 "thread": "nvmf_tgt_poll_group_000", 00:16:51.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:51.262 "listen_address": { 00:16:51.262 "trtype": "TCP", 00:16:51.262 "adrfam": "IPv4", 00:16:51.262 "traddr": "10.0.0.2", 00:16:51.262 "trsvcid": "4420" 00:16:51.262 }, 00:16:51.262 "peer_address": { 00:16:51.262 "trtype": "TCP", 00:16:51.262 "adrfam": "IPv4", 00:16:51.262 "traddr": "10.0.0.1", 00:16:51.262 "trsvcid": "60178" 00:16:51.262 }, 00:16:51.262 "auth": { 00:16:51.262 "state": 
"completed", 00:16:51.262 "digest": "sha512", 00:16:51.262 "dhgroup": "ffdhe3072" 00:16:51.262 } 00:16:51.262 } 00:16:51.262 ]' 00:16:51.262 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.262 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.262 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.262 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.262 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.520 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.520 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.520 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.520 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:51.520 04:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret 
DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:52.086 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.086 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:52.086 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.086 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.086 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.086 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.086 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:52.086 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:52.344 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:52.344 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.344 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.344 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:52.344 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:52.344 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.344 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.344 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.344 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.344 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.344 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.344 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.344 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.602 00:16:52.603 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.603 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.603 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.861 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.861 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.861 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.861 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.861 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.861 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.861 { 00:16:52.861 "cntlid": 115, 00:16:52.861 "qid": 0, 00:16:52.861 "state": "enabled", 00:16:52.861 "thread": "nvmf_tgt_poll_group_000", 00:16:52.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:52.861 "listen_address": { 00:16:52.861 "trtype": "TCP", 00:16:52.861 "adrfam": "IPv4", 00:16:52.861 "traddr": "10.0.0.2", 00:16:52.861 "trsvcid": "4420" 00:16:52.861 }, 00:16:52.861 "peer_address": { 00:16:52.861 "trtype": "TCP", 00:16:52.861 "adrfam": "IPv4", 00:16:52.861 "traddr": "10.0.0.1", 00:16:52.861 "trsvcid": "60196" 00:16:52.861 }, 00:16:52.861 "auth": { 00:16:52.861 "state": "completed", 00:16:52.861 "digest": "sha512", 00:16:52.861 "dhgroup": "ffdhe3072" 00:16:52.861 } 00:16:52.861 } 00:16:52.861 ]' 00:16:52.861 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.861 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.861 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.861 04:53:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:52.861 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.861 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.861 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.861 04:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.119 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:53.119 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:53.685 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.685 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:53.685 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:53.685 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.685 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.685 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.685 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:53.685 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:53.943 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:53.943 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.943 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.943 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:53.943 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:53.943 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.943 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.943 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.943 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:53.943 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.943 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.943 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.943 04:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.201 00:16:54.201 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.201 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.201 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.459 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.459 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.459 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.459 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.459 04:53:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.459 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.459 { 00:16:54.459 "cntlid": 117, 00:16:54.459 "qid": 0, 00:16:54.459 "state": "enabled", 00:16:54.459 "thread": "nvmf_tgt_poll_group_000", 00:16:54.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:54.459 "listen_address": { 00:16:54.459 "trtype": "TCP", 00:16:54.459 "adrfam": "IPv4", 00:16:54.459 "traddr": "10.0.0.2", 00:16:54.459 "trsvcid": "4420" 00:16:54.459 }, 00:16:54.459 "peer_address": { 00:16:54.459 "trtype": "TCP", 00:16:54.459 "adrfam": "IPv4", 00:16:54.459 "traddr": "10.0.0.1", 00:16:54.459 "trsvcid": "60220" 00:16:54.459 }, 00:16:54.459 "auth": { 00:16:54.459 "state": "completed", 00:16:54.459 "digest": "sha512", 00:16:54.459 "dhgroup": "ffdhe3072" 00:16:54.459 } 00:16:54.459 } 00:16:54.459 ]' 00:16:54.459 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.459 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.459 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.459 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:54.459 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.459 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.459 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.459 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.718 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:54.718 04:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:16:55.285 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.285 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:55.285 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.285 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.285 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.285 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.285 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:55.285 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:55.544 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:55.544 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.544 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.544 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:55.544 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.544 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.544 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:55.544 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.544 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.544 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.544 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.544 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.544 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.802 00:16:55.802 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.802 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.802 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.061 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.061 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.061 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.061 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.061 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.061 04:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.061 { 00:16:56.061 "cntlid": 119, 00:16:56.061 "qid": 0, 00:16:56.061 "state": "enabled", 00:16:56.061 "thread": "nvmf_tgt_poll_group_000", 00:16:56.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:56.061 "listen_address": { 00:16:56.061 "trtype": "TCP", 00:16:56.061 "adrfam": "IPv4", 00:16:56.061 "traddr": "10.0.0.2", 00:16:56.061 "trsvcid": "4420" 00:16:56.061 }, 00:16:56.061 "peer_address": { 00:16:56.061 "trtype": "TCP", 00:16:56.061 "adrfam": "IPv4", 00:16:56.061 "traddr": "10.0.0.1", 
00:16:56.061 "trsvcid": "59624" 00:16:56.061 }, 00:16:56.061 "auth": { 00:16:56.061 "state": "completed", 00:16:56.061 "digest": "sha512", 00:16:56.061 "dhgroup": "ffdhe3072" 00:16:56.061 } 00:16:56.061 } 00:16:56.061 ]' 00:16:56.061 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.061 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.061 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.061 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:56.061 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.061 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.061 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.061 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.320 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:56.320 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:16:56.888 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.888 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:56.888 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.888 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.888 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.888 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.888 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.888 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:56.888 04:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:57.147 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:57.147 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.147 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.147 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:57.147 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:57.147 04:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.147 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.147 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.147 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.147 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.147 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.147 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.147 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.406 00:16:57.406 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.406 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.406 04:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.665 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.665 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.665 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.665 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.665 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.665 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.665 { 00:16:57.665 "cntlid": 121, 00:16:57.665 "qid": 0, 00:16:57.665 "state": "enabled", 00:16:57.665 "thread": "nvmf_tgt_poll_group_000", 00:16:57.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:57.665 "listen_address": { 00:16:57.665 "trtype": "TCP", 00:16:57.665 "adrfam": "IPv4", 00:16:57.665 "traddr": "10.0.0.2", 00:16:57.665 "trsvcid": "4420" 00:16:57.665 }, 00:16:57.665 "peer_address": { 00:16:57.665 "trtype": "TCP", 00:16:57.665 "adrfam": "IPv4", 00:16:57.665 "traddr": "10.0.0.1", 00:16:57.665 "trsvcid": "59644" 00:16:57.665 }, 00:16:57.665 "auth": { 00:16:57.665 "state": "completed", 00:16:57.665 "digest": "sha512", 00:16:57.665 "dhgroup": "ffdhe4096" 00:16:57.665 } 00:16:57.665 } 00:16:57.665 ]' 00:16:57.665 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.665 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.665 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.665 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:57.665 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.665 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.665 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.665 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.925 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:57.925 04:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:16:58.492 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.492 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:58.492 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.492 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.492 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.492 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.492 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:58.493 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:58.752 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:58.752 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.752 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.752 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:58.752 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:58.752 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.752 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.752 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.752 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:58.752 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.752 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.752 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.752 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.010 00:16:59.011 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.011 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.011 04:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.269 04:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.269 04:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.269 04:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.269 04:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.269 
04:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.269 04:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.269 { 00:16:59.269 "cntlid": 123, 00:16:59.269 "qid": 0, 00:16:59.269 "state": "enabled", 00:16:59.269 "thread": "nvmf_tgt_poll_group_000", 00:16:59.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:59.269 "listen_address": { 00:16:59.269 "trtype": "TCP", 00:16:59.269 "adrfam": "IPv4", 00:16:59.269 "traddr": "10.0.0.2", 00:16:59.269 "trsvcid": "4420" 00:16:59.269 }, 00:16:59.269 "peer_address": { 00:16:59.269 "trtype": "TCP", 00:16:59.269 "adrfam": "IPv4", 00:16:59.269 "traddr": "10.0.0.1", 00:16:59.269 "trsvcid": "59668" 00:16:59.269 }, 00:16:59.269 "auth": { 00:16:59.269 "state": "completed", 00:16:59.269 "digest": "sha512", 00:16:59.269 "dhgroup": "ffdhe4096" 00:16:59.269 } 00:16:59.269 } 00:16:59.269 ]' 00:16:59.270 04:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.270 04:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.270 04:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.270 04:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:59.270 04:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.270 04:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.270 04:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.270 04:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.529 04:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:16:59.529 04:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:17:00.097 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.097 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:00.097 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.097 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.097 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.097 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.097 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:00.097 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:00.356 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:00.356 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.356 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.356 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:00.356 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:00.356 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.356 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.356 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.356 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.356 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.356 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.356 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.356 04:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.615 00:17:00.615 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.615 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.615 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.873 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.873 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.873 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.873 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.873 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.873 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.873 { 00:17:00.873 "cntlid": 125, 00:17:00.873 "qid": 0, 00:17:00.873 "state": "enabled", 00:17:00.873 "thread": "nvmf_tgt_poll_group_000", 00:17:00.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:00.873 "listen_address": { 00:17:00.873 "trtype": "TCP", 00:17:00.873 "adrfam": "IPv4", 00:17:00.873 "traddr": "10.0.0.2", 00:17:00.873 "trsvcid": "4420" 00:17:00.873 }, 00:17:00.873 "peer_address": { 
00:17:00.873 "trtype": "TCP", 00:17:00.873 "adrfam": "IPv4", 00:17:00.873 "traddr": "10.0.0.1", 00:17:00.873 "trsvcid": "59696" 00:17:00.873 }, 00:17:00.873 "auth": { 00:17:00.873 "state": "completed", 00:17:00.873 "digest": "sha512", 00:17:00.873 "dhgroup": "ffdhe4096" 00:17:00.873 } 00:17:00.873 } 00:17:00.873 ]' 00:17:00.873 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.873 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.873 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.873 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:00.873 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.874 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.874 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.874 04:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.133 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:17:01.133 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:17:01.701 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.701 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:01.701 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.701 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.701 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.701 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.701 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:01.701 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:01.960 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:01.960 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.960 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:01.960 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:01.960 04:53:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:01.960 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.960 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:01.960 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.960 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.960 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.960 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.960 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.960 04:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.219 00:17:02.219 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.219 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.219 04:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.478 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.478 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.478 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.478 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.478 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.478 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.478 { 00:17:02.478 "cntlid": 127, 00:17:02.478 "qid": 0, 00:17:02.478 "state": "enabled", 00:17:02.478 "thread": "nvmf_tgt_poll_group_000", 00:17:02.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:02.478 "listen_address": { 00:17:02.478 "trtype": "TCP", 00:17:02.478 "adrfam": "IPv4", 00:17:02.478 "traddr": "10.0.0.2", 00:17:02.478 "trsvcid": "4420" 00:17:02.478 }, 00:17:02.478 "peer_address": { 00:17:02.478 "trtype": "TCP", 00:17:02.478 "adrfam": "IPv4", 00:17:02.478 "traddr": "10.0.0.1", 00:17:02.478 "trsvcid": "59712" 00:17:02.478 }, 00:17:02.478 "auth": { 00:17:02.478 "state": "completed", 00:17:02.478 "digest": "sha512", 00:17:02.478 "dhgroup": "ffdhe4096" 00:17:02.478 } 00:17:02.478 } 00:17:02.478 ]' 00:17:02.478 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.478 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.478 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.478 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:02.478 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.478 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.478 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.478 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.737 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:17:02.737 04:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:17:03.305 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.305 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:03.305 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.305 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.305 04:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.305 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.305 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.305 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:03.305 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:03.564 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:03.564 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.564 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.564 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:03.564 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.564 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.564 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.564 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.564 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.564 
04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.564 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.564 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.565 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.824 00:17:03.824 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.824 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.824 04:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.084 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.084 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.084 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.084 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.084 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.084 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.084 { 00:17:04.084 "cntlid": 129, 00:17:04.084 "qid": 0, 00:17:04.084 "state": "enabled", 00:17:04.084 "thread": "nvmf_tgt_poll_group_000", 00:17:04.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:04.084 "listen_address": { 00:17:04.084 "trtype": "TCP", 00:17:04.084 "adrfam": "IPv4", 00:17:04.084 "traddr": "10.0.0.2", 00:17:04.084 "trsvcid": "4420" 00:17:04.084 }, 00:17:04.084 "peer_address": { 00:17:04.084 "trtype": "TCP", 00:17:04.084 "adrfam": "IPv4", 00:17:04.084 "traddr": "10.0.0.1", 00:17:04.084 "trsvcid": "59724" 00:17:04.084 }, 00:17:04.084 "auth": { 00:17:04.084 "state": "completed", 00:17:04.084 "digest": "sha512", 00:17:04.084 "dhgroup": "ffdhe6144" 00:17:04.084 } 00:17:04.084 } 00:17:04.084 ]' 00:17:04.084 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.084 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.084 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.084 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:04.084 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.084 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.084 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.084 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:17:04.342 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:17:04.342 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:17:04.909 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.909 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:04.910 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.910 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.910 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.910 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.910 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.910 04:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.168 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:05.168 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.168 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.168 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:05.168 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:05.168 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.168 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.168 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.168 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.168 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.168 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.168 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.168 04:53:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.427 00:17:05.427 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.427 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.427 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.686 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.686 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.686 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.686 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.686 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.686 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.686 { 00:17:05.686 "cntlid": 131, 00:17:05.686 "qid": 0, 00:17:05.686 "state": "enabled", 00:17:05.686 "thread": "nvmf_tgt_poll_group_000", 00:17:05.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:05.686 "listen_address": { 00:17:05.686 "trtype": "TCP", 00:17:05.686 "adrfam": "IPv4", 00:17:05.686 "traddr": "10.0.0.2", 00:17:05.686 "trsvcid": "4420" 00:17:05.686 }, 00:17:05.686 "peer_address": { 
00:17:05.686 "trtype": "TCP", 00:17:05.686 "adrfam": "IPv4", 00:17:05.686 "traddr": "10.0.0.1", 00:17:05.686 "trsvcid": "59752" 00:17:05.686 }, 00:17:05.686 "auth": { 00:17:05.686 "state": "completed", 00:17:05.686 "digest": "sha512", 00:17:05.686 "dhgroup": "ffdhe6144" 00:17:05.686 } 00:17:05.686 } 00:17:05.686 ]' 00:17:05.686 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.686 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.686 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.945 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:05.945 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.945 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.945 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.945 04:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.945 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:17:05.945 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:17:06.513 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.513 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:06.513 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.513 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.771 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.771 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.771 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:06.771 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:06.771 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:06.771 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.771 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:06.771 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:06.771 04:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:06.771 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.771 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.771 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.771 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.771 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.771 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.771 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.771 04:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.338 00:17:07.338 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.338 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.338 04:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.338 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.338 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.338 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.338 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.338 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.338 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.338 { 00:17:07.338 "cntlid": 133, 00:17:07.338 "qid": 0, 00:17:07.338 "state": "enabled", 00:17:07.338 "thread": "nvmf_tgt_poll_group_000", 00:17:07.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:07.338 "listen_address": { 00:17:07.338 "trtype": "TCP", 00:17:07.338 "adrfam": "IPv4", 00:17:07.338 "traddr": "10.0.0.2", 00:17:07.338 "trsvcid": "4420" 00:17:07.338 }, 00:17:07.338 "peer_address": { 00:17:07.338 "trtype": "TCP", 00:17:07.338 "adrfam": "IPv4", 00:17:07.338 "traddr": "10.0.0.1", 00:17:07.338 "trsvcid": "34776" 00:17:07.338 }, 00:17:07.338 "auth": { 00:17:07.338 "state": "completed", 00:17:07.338 "digest": "sha512", 00:17:07.338 "dhgroup": "ffdhe6144" 00:17:07.338 } 00:17:07.338 } 00:17:07.338 ]' 00:17:07.338 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.338 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.338 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:17:07.598 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:07.598 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.598 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.598 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.598 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.857 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:17:07.857 04:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.425 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.993 00:17:08.993 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.993 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.994 04:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.994 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.994 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.994 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.994 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.994 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.994 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.994 { 00:17:08.994 "cntlid": 135, 00:17:08.994 "qid": 0, 00:17:08.994 "state": "enabled", 00:17:08.994 "thread": "nvmf_tgt_poll_group_000", 00:17:08.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:08.994 "listen_address": { 00:17:08.994 "trtype": "TCP", 00:17:08.994 "adrfam": "IPv4", 00:17:08.994 "traddr": "10.0.0.2", 00:17:08.994 "trsvcid": "4420" 00:17:08.994 }, 00:17:08.994 "peer_address": { 00:17:08.994 "trtype": "TCP", 00:17:08.994 "adrfam": "IPv4", 00:17:08.994 "traddr": "10.0.0.1", 00:17:08.994 "trsvcid": "34812" 00:17:08.994 }, 00:17:08.994 "auth": { 00:17:08.994 "state": "completed", 00:17:08.994 "digest": "sha512", 00:17:08.994 "dhgroup": "ffdhe6144" 00:17:08.994 } 00:17:08.994 } 00:17:08.994 ]' 00:17:08.994 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.252 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.252 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.252 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.252 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.252 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.252 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.252 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:17:09.511 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:17:09.511 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:17:10.080 04:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.080 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:10.080 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.080 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.080 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.080 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.080 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.080 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:10.080 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:10.080 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:10.080 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.080 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.080 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:10.080 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.080 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.080 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.080 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.080 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.339 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.339 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.339 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.339 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.598 00:17:10.598 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.598 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.598 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.858 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.858 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.858 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.858 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.858 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.858 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.858 { 00:17:10.858 "cntlid": 137, 00:17:10.858 "qid": 0, 00:17:10.858 "state": "enabled", 00:17:10.858 "thread": "nvmf_tgt_poll_group_000", 00:17:10.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:10.858 "listen_address": { 00:17:10.858 "trtype": "TCP", 00:17:10.858 "adrfam": "IPv4", 00:17:10.858 "traddr": "10.0.0.2", 00:17:10.858 "trsvcid": "4420" 00:17:10.858 }, 00:17:10.858 "peer_address": { 00:17:10.858 "trtype": "TCP", 00:17:10.858 "adrfam": "IPv4", 
00:17:10.858 "traddr": "10.0.0.1", 00:17:10.858 "trsvcid": "34848" 00:17:10.858 }, 00:17:10.858 "auth": { 00:17:10.858 "state": "completed", 00:17:10.858 "digest": "sha512", 00:17:10.858 "dhgroup": "ffdhe8192" 00:17:10.858 } 00:17:10.858 } 00:17:10.858 ]' 00:17:10.858 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.858 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.858 04:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.116 04:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:11.116 04:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.116 04:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.116 04:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.116 04:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.116 04:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:17:11.116 04:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:17:11.682 04:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.682 04:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:11.682 04:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.682 04:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.941 04:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.941 04:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.941 04:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:11.941 04:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:11.941 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:11.941 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.941 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:11.941 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:17:11.941 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:11.941 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.941 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.941 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.941 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.941 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.941 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.941 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.941 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.509 00:17:12.509 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.509 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.509 
04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.768 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.768 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.768 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.768 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.768 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.768 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.768 { 00:17:12.768 "cntlid": 139, 00:17:12.768 "qid": 0, 00:17:12.768 "state": "enabled", 00:17:12.768 "thread": "nvmf_tgt_poll_group_000", 00:17:12.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:12.768 "listen_address": { 00:17:12.768 "trtype": "TCP", 00:17:12.768 "adrfam": "IPv4", 00:17:12.768 "traddr": "10.0.0.2", 00:17:12.768 "trsvcid": "4420" 00:17:12.768 }, 00:17:12.768 "peer_address": { 00:17:12.768 "trtype": "TCP", 00:17:12.768 "adrfam": "IPv4", 00:17:12.768 "traddr": "10.0.0.1", 00:17:12.768 "trsvcid": "34876" 00:17:12.768 }, 00:17:12.768 "auth": { 00:17:12.768 "state": "completed", 00:17:12.768 "digest": "sha512", 00:17:12.768 "dhgroup": "ffdhe8192" 00:17:12.768 } 00:17:12.768 } 00:17:12.768 ]' 00:17:12.768 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.768 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.768 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.768 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.768 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.768 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.768 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.768 04:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.027 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:17:13.027 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: --dhchap-ctrl-secret DHHC-1:02:MmM5ZThkNDYyNjg0YjE0NzU0ZTE1NDNiMTk2Nzk0MDI5Y2YyOTg0NjczZWI1Y2VidAwnCw==: 00:17:13.595 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.595 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:13.595 04:54:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.595 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.595 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.595 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.595 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:13.595 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:13.854 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:13.854 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.854 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.854 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:13.854 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.854 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.854 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.854 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.854 04:54:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.854 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.854 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.854 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.854 04:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.430 00:17:14.430 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.430 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.430 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.430 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.430 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.430 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.430 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.430 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.430 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.430 { 00:17:14.430 "cntlid": 141, 00:17:14.430 "qid": 0, 00:17:14.430 "state": "enabled", 00:17:14.430 "thread": "nvmf_tgt_poll_group_000", 00:17:14.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:14.430 "listen_address": { 00:17:14.430 "trtype": "TCP", 00:17:14.430 "adrfam": "IPv4", 00:17:14.430 "traddr": "10.0.0.2", 00:17:14.430 "trsvcid": "4420" 00:17:14.430 }, 00:17:14.430 "peer_address": { 00:17:14.430 "trtype": "TCP", 00:17:14.430 "adrfam": "IPv4", 00:17:14.430 "traddr": "10.0.0.1", 00:17:14.430 "trsvcid": "34902" 00:17:14.430 }, 00:17:14.430 "auth": { 00:17:14.430 "state": "completed", 00:17:14.430 "digest": "sha512", 00:17:14.430 "dhgroup": "ffdhe8192" 00:17:14.430 } 00:17:14.430 } 00:17:14.430 ]' 00:17:14.430 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.688 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.688 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.688 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.688 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.688 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.688 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.688 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.947 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:17:14.947 04:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:01:ZmJiMDkwMjhmMTBiNDczZmE3NDc0OGMzNjczMTgzNDKoOvnM: 00:17:15.515 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.515 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:15.515 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.515 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.515 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.515 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.515 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.515 04:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.515 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:15.515 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.515 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:15.515 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:15.515 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:15.515 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.515 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:15.515 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.515 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.774 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.774 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:15.774 04:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.774 04:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.035 00:17:16.035 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.035 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.035 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.307 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.307 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.307 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.307 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.307 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.307 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.307 { 00:17:16.307 "cntlid": 143, 00:17:16.307 "qid": 0, 00:17:16.307 "state": "enabled", 00:17:16.307 "thread": "nvmf_tgt_poll_group_000", 00:17:16.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:16.307 "listen_address": { 00:17:16.307 "trtype": "TCP", 00:17:16.307 "adrfam": "IPv4", 00:17:16.307 "traddr": "10.0.0.2", 00:17:16.307 "trsvcid": "4420" 00:17:16.307 }, 00:17:16.307 "peer_address": { 00:17:16.307 "trtype": 
"TCP", 00:17:16.307 "adrfam": "IPv4", 00:17:16.307 "traddr": "10.0.0.1", 00:17:16.307 "trsvcid": "41426" 00:17:16.307 }, 00:17:16.307 "auth": { 00:17:16.307 "state": "completed", 00:17:16.307 "digest": "sha512", 00:17:16.307 "dhgroup": "ffdhe8192" 00:17:16.307 } 00:17:16.307 } 00:17:16.307 ]' 00:17:16.307 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.307 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.307 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.591 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.591 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.591 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.591 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.591 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.591 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:17:16.591 04:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 
00:17:17.218 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.218 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:17.218 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.218 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.218 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.218 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:17.218 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:17.218 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:17.218 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:17.218 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:17.218 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:17.477 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:17.477 04:54:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.477 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:17.477 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:17.477 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:17.477 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.477 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.477 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.477 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.477 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.477 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.477 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.477 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.045 00:17:18.045 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.045 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.045 04:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.045 04:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.045 04:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.045 04:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.045 04:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.045 04:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.045 04:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.045 { 00:17:18.045 "cntlid": 145, 00:17:18.045 "qid": 0, 00:17:18.045 "state": "enabled", 00:17:18.045 "thread": "nvmf_tgt_poll_group_000", 00:17:18.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:18.045 "listen_address": { 00:17:18.045 "trtype": "TCP", 00:17:18.045 "adrfam": "IPv4", 00:17:18.045 "traddr": "10.0.0.2", 00:17:18.045 "trsvcid": "4420" 00:17:18.045 }, 00:17:18.045 "peer_address": { 00:17:18.045 "trtype": "TCP", 00:17:18.045 "adrfam": "IPv4", 00:17:18.045 "traddr": "10.0.0.1", 00:17:18.045 "trsvcid": "41450" 00:17:18.045 }, 00:17:18.045 "auth": { 00:17:18.045 "state": "completed", 00:17:18.045 "digest": "sha512", 00:17:18.045 "dhgroup": "ffdhe8192" 00:17:18.045 } 00:17:18.045 } 00:17:18.045 ]' 
00:17:18.045 04:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.304 04:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.304 04:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.304 04:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:18.304 04:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.304 04:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.304 04:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.304 04:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.563 04:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:17:18.563 04:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDQwY2JkNTQ1MTZiMGE0M2Y4MmM1NTVkZWIwZDUyNTQwYzQxYzg2MWI2OGU5MmY3bd2j1A==: --dhchap-ctrl-secret DHHC-1:03:YzVhNDdiMjAyYzc5OWY5OTc0MTUzMjEzZWEwMDU3Y2VkN2E4NTM0ODllZWUzMGNlOTU3OGI4YTA0MzZmMjQ1MeHaBCw=: 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.131 04:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:19.131 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:19.390 request: 00:17:19.390 { 00:17:19.390 "name": "nvme0", 00:17:19.390 "trtype": "tcp", 00:17:19.390 "traddr": "10.0.0.2", 00:17:19.390 "adrfam": "ipv4", 00:17:19.390 "trsvcid": "4420", 00:17:19.390 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:19.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:19.390 "prchk_reftag": false, 00:17:19.390 "prchk_guard": false, 00:17:19.390 "hdgst": false, 00:17:19.390 "ddgst": false, 00:17:19.390 "dhchap_key": "key2", 00:17:19.390 "allow_unrecognized_csi": false, 00:17:19.390 "method": "bdev_nvme_attach_controller", 00:17:19.390 "req_id": 1 00:17:19.390 } 00:17:19.390 Got JSON-RPC error response 00:17:19.390 response: 00:17:19.390 { 00:17:19.390 "code": -5, 00:17:19.390 "message": "Input/output error" 00:17:19.390 } 00:17:19.390 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:19.390 04:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.390 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.390 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.390 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:19.390 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.390 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.649 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.649 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.649 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.649 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.649 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.649 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:19.649 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:19.650 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:19.650 04:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:19.650 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.650 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:19.650 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.650 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:19.650 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:19.650 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:19.909 request: 00:17:19.909 { 00:17:19.909 "name": "nvme0", 00:17:19.909 "trtype": "tcp", 00:17:19.909 "traddr": "10.0.0.2", 00:17:19.909 "adrfam": "ipv4", 00:17:19.909 "trsvcid": "4420", 00:17:19.909 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:19.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:19.909 "prchk_reftag": false, 00:17:19.909 "prchk_guard": false, 00:17:19.909 "hdgst": false, 00:17:19.909 "ddgst": false, 00:17:19.909 "dhchap_key": "key1", 00:17:19.909 "dhchap_ctrlr_key": "ckey2", 00:17:19.909 "allow_unrecognized_csi": false, 00:17:19.909 "method": 
"bdev_nvme_attach_controller", 00:17:19.909 "req_id": 1 00:17:19.909 } 00:17:19.909 Got JSON-RPC error response 00:17:19.909 response: 00:17:19.909 { 00:17:19.909 "code": -5, 00:17:19.909 "message": "Input/output error" 00:17:19.909 } 00:17:19.909 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:19.909 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.909 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.909 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.909 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:19.909 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.909 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.909 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.909 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:17:19.909 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.909 04:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.909 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.909 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:19.909 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:19.909 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.909 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:19.909 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.909 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:19.909 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.909 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.909 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.909 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.477 request: 00:17:20.477 { 00:17:20.477 "name": "nvme0", 00:17:20.477 "trtype": "tcp", 00:17:20.477 "traddr": "10.0.0.2", 00:17:20.477 "adrfam": "ipv4", 00:17:20.477 "trsvcid": "4420", 00:17:20.477 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:20.477 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:20.477 "prchk_reftag": false, 00:17:20.477 "prchk_guard": false, 00:17:20.477 "hdgst": false, 00:17:20.477 "ddgst": false, 00:17:20.477 "dhchap_key": "key1", 00:17:20.477 "dhchap_ctrlr_key": "ckey1", 00:17:20.477 "allow_unrecognized_csi": false, 00:17:20.477 "method": "bdev_nvme_attach_controller", 00:17:20.477 "req_id": 1 00:17:20.477 } 00:17:20.477 Got JSON-RPC error response 00:17:20.477 response: 00:17:20.477 { 00:17:20.477 "code": -5, 00:17:20.477 "message": "Input/output error" 00:17:20.477 } 00:17:20.477 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:20.477 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.477 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.478 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.478 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:20.478 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.478 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.478 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.478 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 605773 00:17:20.478 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 605773 ']' 00:17:20.478 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 605773 00:17:20.478 04:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:20.478 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.478 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 605773 00:17:20.478 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:20.478 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:20.478 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 605773' 00:17:20.478 killing process with pid 605773 00:17:20.478 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 605773 00:17:20.478 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 605773 00:17:20.737 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:20.737 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:20.737 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:20.737 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.737 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=627690 00:17:20.737 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:20.737 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 627690 00:17:20.737 04:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 627690 ']' 00:17:20.737 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.737 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.737 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.737 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.737 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.997 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.997 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:20.997 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:20.997 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:20.997 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.997 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.997 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:20.997 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 627690 00:17:20.997 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 627690 ']' 00:17:20.997 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.997 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.997 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.997 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.997 04:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.256 null0 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Svr 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.xzU ]] 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xzU 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.IAh 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.68j ]] 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.68j 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.8HE 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.6Oh ]] 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Oh 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.VxF 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # 
connect_authenticate sha512 ffdhe8192 3 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.256 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.257 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.257 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:21.257 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.257 04:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key3 00:17:22.194 nvme0n1 00:17:22.194 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.194 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.194 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.194 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.194 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.194 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.194 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.194 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.194 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.194 { 00:17:22.194 "cntlid": 1, 00:17:22.194 "qid": 0, 00:17:22.194 "state": "enabled", 00:17:22.194 "thread": "nvmf_tgt_poll_group_000", 00:17:22.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:22.194 "listen_address": { 00:17:22.194 "trtype": "TCP", 00:17:22.194 "adrfam": "IPv4", 00:17:22.194 "traddr": "10.0.0.2", 00:17:22.194 "trsvcid": "4420" 00:17:22.194 }, 00:17:22.194 "peer_address": { 00:17:22.194 "trtype": "TCP", 00:17:22.194 "adrfam": "IPv4", 00:17:22.194 "traddr": "10.0.0.1", 00:17:22.194 "trsvcid": "41502" 00:17:22.194 }, 00:17:22.194 "auth": { 00:17:22.194 "state": "completed", 00:17:22.194 "digest": "sha512", 00:17:22.194 "dhgroup": "ffdhe8192" 00:17:22.194 } 00:17:22.194 } 00:17:22.194 ]' 00:17:22.194 04:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.453 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.453 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.453 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:22.453 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.453 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.453 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.453 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.711 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:17:22.711 04:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:17:23.279 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.279 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:23.279 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.279 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.279 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.279 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:17:23.279 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.279 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.279 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.279 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:23.279 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:23.538 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:23.538 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:23.538 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:23.538 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:23.538 04:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.538 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:23.538 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.538 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.538 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.538 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.538 request: 00:17:23.538 { 00:17:23.538 "name": "nvme0", 00:17:23.538 "trtype": "tcp", 00:17:23.538 "traddr": "10.0.0.2", 00:17:23.538 "adrfam": "ipv4", 00:17:23.538 "trsvcid": "4420", 00:17:23.538 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:23.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:23.538 "prchk_reftag": false, 00:17:23.538 "prchk_guard": false, 00:17:23.538 "hdgst": false, 00:17:23.538 "ddgst": false, 00:17:23.538 "dhchap_key": "key3", 00:17:23.538 "allow_unrecognized_csi": false, 00:17:23.538 "method": "bdev_nvme_attach_controller", 00:17:23.538 "req_id": 1 00:17:23.538 } 00:17:23.538 Got JSON-RPC error response 00:17:23.538 response: 00:17:23.538 { 00:17:23.538 "code": -5, 00:17:23.538 "message": "Input/output error" 00:17:23.538 } 00:17:23.538 
04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:23.538 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.538 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.538 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.538 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:23.538 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:23.538 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:23.538 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:23.796 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:23.796 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:23.796 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:23.796 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:23.796 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.796 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:23.796 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.796 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.796 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.796 04:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.055 request: 00:17:24.055 { 00:17:24.055 "name": "nvme0", 00:17:24.055 "trtype": "tcp", 00:17:24.055 "traddr": "10.0.0.2", 00:17:24.055 "adrfam": "ipv4", 00:17:24.055 "trsvcid": "4420", 00:17:24.055 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:24.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:24.055 "prchk_reftag": false, 00:17:24.055 "prchk_guard": false, 00:17:24.055 "hdgst": false, 00:17:24.055 "ddgst": false, 00:17:24.055 "dhchap_key": "key3", 00:17:24.055 "allow_unrecognized_csi": false, 00:17:24.055 "method": "bdev_nvme_attach_controller", 00:17:24.055 "req_id": 1 00:17:24.055 } 00:17:24.055 Got JSON-RPC error response 00:17:24.055 response: 00:17:24.055 { 00:17:24.055 "code": -5, 00:17:24.055 "message": "Input/output error" 00:17:24.055 } 00:17:24.055 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:24.055 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.055 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.056 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.056 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:24.056 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:24.056 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:24.056 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:24.056 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:24.056 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:24.315 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:24.315 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.315 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.315 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.315 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:24.315 
04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.315 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.315 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.315 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:24.315 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:24.315 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:24.315 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:24.315 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.315 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:24.315 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.315 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:24.315 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:24.315 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:24.573 request: 00:17:24.573 { 00:17:24.573 "name": "nvme0", 00:17:24.573 "trtype": "tcp", 00:17:24.573 "traddr": "10.0.0.2", 00:17:24.573 "adrfam": "ipv4", 00:17:24.573 "trsvcid": "4420", 00:17:24.573 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:24.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:24.573 "prchk_reftag": false, 00:17:24.573 "prchk_guard": false, 00:17:24.573 "hdgst": false, 00:17:24.573 "ddgst": false, 00:17:24.573 "dhchap_key": "key0", 00:17:24.573 "dhchap_ctrlr_key": "key1", 00:17:24.573 "allow_unrecognized_csi": false, 00:17:24.573 "method": "bdev_nvme_attach_controller", 00:17:24.573 "req_id": 1 00:17:24.573 } 00:17:24.573 Got JSON-RPC error response 00:17:24.573 response: 00:17:24.573 { 00:17:24.573 "code": -5, 00:17:24.573 "message": "Input/output error" 00:17:24.573 } 00:17:24.573 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:24.574 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.574 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.574 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.574 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:24.574 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:24.574 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:24.832 nvme0n1 00:17:24.832 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:24.832 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:24.832 04:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.091 04:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.091 04:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.091 04:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.350 04:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:17:25.350 04:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.350 04:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.350 04:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.350 04:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:25.350 04:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:25.350 04:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:25.927 nvme0n1 00:17:25.927 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:25.927 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:25.927 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.189 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.189 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:26.189 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.189 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.189 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.189 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:26.189 04:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.189 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:26.448 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.448 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:17:26.448 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: --dhchap-ctrl-secret DHHC-1:03:MjkxNmFiZDFlOWI5NGMxYmMwYWY3MzZmYjAzMWNkN2MyYWYxZGJhNWUzMWViYWEzMDQxNGIwNjMxNzNmZjllOGtnAio=: 00:17:27.016 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:27.016 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:27.016 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:27.016 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:27.016 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:27.016 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 
00:17:27.016 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:27.016 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.016 04:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.276 04:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:27.276 04:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:27.276 04:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:27.276 04:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:27.276 04:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.276 04:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:27.276 04:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.276 04:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:27.276 04:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:27.276 04:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:27.535 request: 00:17:27.535 { 00:17:27.535 "name": "nvme0", 00:17:27.535 "trtype": "tcp", 00:17:27.535 "traddr": "10.0.0.2", 00:17:27.535 "adrfam": "ipv4", 00:17:27.535 "trsvcid": "4420", 00:17:27.535 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:27.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:17:27.535 "prchk_reftag": false, 00:17:27.535 "prchk_guard": false, 00:17:27.535 "hdgst": false, 00:17:27.535 "ddgst": false, 00:17:27.535 "dhchap_key": "key1", 00:17:27.535 "allow_unrecognized_csi": false, 00:17:27.535 "method": "bdev_nvme_attach_controller", 00:17:27.535 "req_id": 1 00:17:27.535 } 00:17:27.535 Got JSON-RPC error response 00:17:27.535 response: 00:17:27.535 { 00:17:27.535 "code": -5, 00:17:27.535 "message": "Input/output error" 00:17:27.535 } 00:17:27.535 04:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:27.535 04:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:27.535 04:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:27.535 04:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:27.535 04:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:27.535 04:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:27.535 04:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:28.472 nvme0n1 00:17:28.472 04:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:28.472 04:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:28.472 04:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.472 04:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.472 04:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.472 04:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.731 04:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:28.731 04:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.731 04:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.731 04:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.731 04:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:28.731 
04:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:28.731 04:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:28.990 nvme0n1 00:17:28.990 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:28.990 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:28.990 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.248 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.248 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.248 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.507 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:29.507 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.507 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:29.507 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.507 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: '' 2s 00:17:29.507 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:29.507 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:29.507 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: 00:17:29.507 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:29.507 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:29.507 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:29.507 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: ]] 00:17:29.507 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZWQ0YWNkZjdkZDJjNTJjM2FmOThkNDkxMGIyZGZlNDHSE0DT: 00:17:29.507 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:29.507 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:29.507 04:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: 2s 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:31.411 04:54:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: ]] 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MjdmMWQ3ODI2MDdmMGMzNzZhYjU0NDZlMzhjZGE5MjY2NWVjNmQzNzJhY2VlODgyw0a4KA==: 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:31.411 04:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:33.948 04:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:33.948 04:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:33.948 04:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:33.948 04:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:33.948 04:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:33.948 04:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:33.948 04:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:33.948 04:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.948 04:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:33.948 04:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.948 04:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.948 04:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.948 04:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:33.948 04:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:33.948 04:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:34.206 nvme0n1 00:17:34.206 04:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:34.206 04:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.206 04:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:34.206 04:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.465 04:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:34.465 04:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:34.724 04:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:34.724 04:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:34.724 04:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.983 04:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.983 04:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:34.983 04:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.983 04:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.983 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.983 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:34.983 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 
00:17:35.242 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:35.242 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.242 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:35.501 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.501 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:35.501 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.501 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.501 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.501 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:35.501 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:35.501 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:35.501 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:35.501 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.501 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t hostrpc 00:17:35.501 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.501 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:35.501 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:35.760 request: 00:17:35.760 { 00:17:35.760 "name": "nvme0", 00:17:35.760 "dhchap_key": "key1", 00:17:35.760 "dhchap_ctrlr_key": "key3", 00:17:35.760 "method": "bdev_nvme_set_keys", 00:17:35.760 "req_id": 1 00:17:35.760 } 00:17:35.760 Got JSON-RPC error response 00:17:35.760 response: 00:17:35.760 { 00:17:35.760 "code": -13, 00:17:35.760 "message": "Permission denied" 00:17:35.760 } 00:17:35.760 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:35.760 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.760 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.760 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:35.760 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:35.760 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:35.760 04:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.019 04:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 
0 )) 00:17:36.019 04:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:36.957 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:36.957 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:36.957 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.216 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:37.216 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:37.216 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.216 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.216 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.216 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:37.216 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:37.216 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:38.154 nvme0n1 00:17:38.154 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:38.154 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.154 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.154 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.154 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:38.154 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:38.154 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:38.154 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:38.154 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.154 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:38.154 04:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.154 04:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:38.154 04:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:38.413 request: 00:17:38.413 { 00:17:38.413 "name": "nvme0", 00:17:38.413 "dhchap_key": "key2", 00:17:38.413 "dhchap_ctrlr_key": "key0", 00:17:38.413 "method": "bdev_nvme_set_keys", 00:17:38.413 "req_id": 1 00:17:38.413 } 00:17:38.413 Got JSON-RPC error response 00:17:38.413 response: 00:17:38.413 { 00:17:38.413 "code": -13, 00:17:38.413 "message": "Permission denied" 00:17:38.413 } 00:17:38.413 04:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:38.413 04:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:38.413 04:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:38.413 04:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:38.413 04:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:38.413 04:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.413 04:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:38.672 04:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:38.672 04:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:39.609 04:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:39.609 
04:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:39.609 04:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.868 04:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:39.868 04:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:39.868 04:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:39.868 04:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 605910 00:17:39.868 04:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 605910 ']' 00:17:39.868 04:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 605910 00:17:39.868 04:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:39.868 04:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.868 04:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 605910 00:17:39.868 04:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:39.868 04:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:39.868 04:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 605910' 00:17:39.868 killing process with pid 605910 00:17:39.868 04:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 605910 00:17:39.868 04:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 605910 00:17:40.127 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:40.127 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:40.127 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:40.127 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:40.127 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:40.127 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:40.127 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:40.127 rmmod nvme_tcp 00:17:40.386 rmmod nvme_fabrics 00:17:40.386 rmmod nvme_keyring 00:17:40.386 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:40.386 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:40.386 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:40.386 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 627690 ']' 00:17:40.386 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 627690 00:17:40.386 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 627690 ']' 00:17:40.386 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 627690 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 627690 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 627690' 00:17:40.387 killing process with pid 627690 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 627690 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 627690 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:17:40.387 04:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Svr /tmp/spdk.key-sha256.IAh /tmp/spdk.key-sha384.8HE /tmp/spdk.key-sha512.VxF /tmp/spdk.key-sha512.xzU /tmp/spdk.key-sha384.68j /tmp/spdk.key-sha256.6Oh '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:42.924 00:17:42.924 real 2m31.842s 00:17:42.924 user 5m49.669s 00:17:42.924 sys 0m24.217s 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.924 ************************************ 00:17:42.924 END TEST nvmf_auth_target 00:17:42.924 ************************************ 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:42.924 ************************************ 00:17:42.924 START TEST nvmf_bdevio_no_huge 00:17:42.924 ************************************ 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:42.924 * Looking for test storage... 00:17:42.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 
-- # local lt=0 gt=0 eq=0 v 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:42.924 04:54:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:42.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.924 --rc genhtml_branch_coverage=1 00:17:42.924 --rc genhtml_function_coverage=1 00:17:42.924 --rc genhtml_legend=1 00:17:42.924 --rc geninfo_all_blocks=1 00:17:42.924 --rc geninfo_unexecuted_blocks=1 00:17:42.924 00:17:42.924 ' 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:42.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.924 --rc genhtml_branch_coverage=1 00:17:42.924 --rc genhtml_function_coverage=1 00:17:42.924 --rc genhtml_legend=1 00:17:42.924 --rc geninfo_all_blocks=1 00:17:42.924 --rc geninfo_unexecuted_blocks=1 00:17:42.924 00:17:42.924 ' 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:42.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.924 --rc genhtml_branch_coverage=1 00:17:42.924 --rc genhtml_function_coverage=1 00:17:42.924 --rc genhtml_legend=1 00:17:42.924 --rc geninfo_all_blocks=1 00:17:42.924 --rc geninfo_unexecuted_blocks=1 00:17:42.924 00:17:42.924 ' 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:42.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.924 --rc genhtml_branch_coverage=1 00:17:42.924 --rc genhtml_function_coverage=1 00:17:42.924 --rc genhtml_legend=1 00:17:42.924 --rc geninfo_all_blocks=1 00:17:42.924 --rc geninfo_unexecuted_blocks=1 00:17:42.924 00:17:42.924 ' 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.924 04:54:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:42.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:42.924 04:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:17:49.556 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:49.556 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:49.556 Found net devices under 0000:af:00.0: cvl_0_0 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.556 
04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:49.556 Found net devices under 0000:af:00.1: cvl_0_1 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:49.556 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:49.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:17:49.557 00:17:49.557 --- 10.0.0.2 ping statistics --- 00:17:49.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.557 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:49.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:49.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:17:49.557 00:17:49.557 --- 10.0.0.1 ping statistics --- 00:17:49.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.557 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=634432 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 634432 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 634432 ']' 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.557 04:54:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.557 [2024-12-10 04:54:39.941507] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:17:49.557 [2024-12-10 04:54:39.941558] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:49.557 [2024-12-10 04:54:40.028949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:49.557 [2024-12-10 04:54:40.077456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.557 [2024-12-10 04:54:40.077493] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.557 [2024-12-10 04:54:40.077500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.557 [2024-12-10 04:54:40.077506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.557 [2024-12-10 04:54:40.077511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:49.557 [2024-12-10 04:54:40.078551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:49.557 [2024-12-10 04:54:40.078659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:49.557 [2024-12-10 04:54:40.078767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:49.557 [2024-12-10 04:54:40.078769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.817 [2024-12-10 04:54:40.844286] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:49.817 04:54:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.817 Malloc0 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.817 [2024-12-10 04:54:40.888596] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.817 04:54:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:49.817 { 00:17:49.817 "params": { 00:17:49.817 "name": "Nvme$subsystem", 00:17:49.817 "trtype": "$TEST_TRANSPORT", 00:17:49.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.817 "adrfam": "ipv4", 00:17:49.817 "trsvcid": "$NVMF_PORT", 00:17:49.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.817 "hdgst": ${hdgst:-false}, 00:17:49.817 "ddgst": ${ddgst:-false} 00:17:49.817 }, 00:17:49.817 "method": "bdev_nvme_attach_controller" 00:17:49.817 } 00:17:49.817 EOF 00:17:49.817 )") 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:49.817 04:54:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:49.817 "params": { 00:17:49.817 "name": "Nvme1", 00:17:49.817 "trtype": "tcp", 00:17:49.817 "traddr": "10.0.0.2", 00:17:49.817 "adrfam": "ipv4", 00:17:49.817 "trsvcid": "4420", 00:17:49.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.817 "hdgst": false, 00:17:49.817 "ddgst": false 00:17:49.817 }, 00:17:49.817 "method": "bdev_nvme_attach_controller" 00:17:49.817 }' 00:17:49.817 [2024-12-10 04:54:40.941031] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:17:49.817 [2024-12-10 04:54:40.941074] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid634678 ] 00:17:50.076 [2024-12-10 04:54:41.019773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:50.077 [2024-12-10 04:54:41.067420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.077 [2024-12-10 04:54:41.067527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.077 [2024-12-10 04:54:41.067528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.336 I/O targets: 00:17:50.336 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:50.336 00:17:50.336 00:17:50.336 CUnit - A unit testing framework for C - Version 2.1-3 00:17:50.336 http://cunit.sourceforge.net/ 00:17:50.336 00:17:50.336 00:17:50.336 Suite: bdevio tests on: Nvme1n1 00:17:50.336 Test: blockdev write read block ...passed 00:17:50.595 Test: blockdev write zeroes read block ...passed 00:17:50.595 Test: blockdev write zeroes read no split ...passed 00:17:50.595 Test: blockdev write zeroes 
read split ...passed 00:17:50.595 Test: blockdev write zeroes read split partial ...passed 00:17:50.595 Test: blockdev reset ...[2024-12-10 04:54:41.516137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:50.595 [2024-12-10 04:54:41.516197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1973be0 (9): Bad file descriptor 00:17:50.595 [2024-12-10 04:54:41.586983] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:50.595 passed 00:17:50.595 Test: blockdev write read 8 blocks ...passed 00:17:50.595 Test: blockdev write read size > 128k ...passed 00:17:50.595 Test: blockdev write read invalid size ...passed 00:17:50.595 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:50.595 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:50.595 Test: blockdev write read max offset ...passed 00:17:50.595 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:50.595 Test: blockdev writev readv 8 blocks ...passed 00:17:50.595 Test: blockdev writev readv 30 x 1block ...passed 00:17:50.854 Test: blockdev writev readv block ...passed 00:17:50.854 Test: blockdev writev readv size > 128k ...passed 00:17:50.854 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:50.854 Test: blockdev comparev and writev ...[2024-12-10 04:54:41.758860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.854 [2024-12-10 04:54:41.758887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.854 [2024-12-10 04:54:41.758901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.854 [2024-12-10 
04:54:41.758910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:50.854 [2024-12-10 04:54:41.759122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.854 [2024-12-10 04:54:41.759132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:50.854 [2024-12-10 04:54:41.759143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.854 [2024-12-10 04:54:41.759150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:50.854 [2024-12-10 04:54:41.759383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.854 [2024-12-10 04:54:41.759393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:50.854 [2024-12-10 04:54:41.759408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.854 [2024-12-10 04:54:41.759415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:50.854 [2024-12-10 04:54:41.759642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.854 [2024-12-10 04:54:41.759652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.854 [2024-12-10 04:54:41.759663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.854 [2024-12-10 04:54:41.759670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:50.854 passed 00:17:50.854 Test: blockdev nvme passthru rw ...passed 00:17:50.854 Test: blockdev nvme passthru vendor specific ...[2024-12-10 04:54:41.841550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.854 [2024-12-10 04:54:41.841568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:50.854 [2024-12-10 04:54:41.841674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.854 [2024-12-10 04:54:41.841683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:50.854 [2024-12-10 04:54:41.841782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.854 [2024-12-10 04:54:41.841791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:50.854 [2024-12-10 04:54:41.841887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.855 [2024-12-10 04:54:41.841896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:50.855 passed 00:17:50.855 Test: blockdev nvme admin passthru ...passed 00:17:50.855 Test: blockdev copy ...passed 00:17:50.855 00:17:50.855 Run Summary: Type Total Ran Passed Failed Inactive 00:17:50.855 suites 1 1 n/a 0 0 00:17:50.855 tests 23 23 23 0 0 00:17:50.855 asserts 152 152 152 0 n/a 00:17:50.855 00:17:50.855 Elapsed time = 1.030 seconds 
00:17:51.113 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.113 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.113 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:51.113 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.113 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:51.113 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:51.113 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:51.113 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:51.113 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:51.113 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:51.113 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:51.113 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:51.113 rmmod nvme_tcp 00:17:51.113 rmmod nvme_fabrics 00:17:51.113 rmmod nvme_keyring 00:17:51.113 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:51.113 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:51.113 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:51.114 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 634432 ']' 00:17:51.114 04:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 634432 00:17:51.114 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 634432 ']' 00:17:51.114 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 634432 00:17:51.114 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:51.114 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.114 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 634432 00:17:51.373 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:51.373 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:51.373 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 634432' 00:17:51.373 killing process with pid 634432 00:17:51.373 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 634432 00:17:51.373 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 634432 00:17:51.632 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:51.632 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:51.632 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:51.632 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:51.632 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:51.632 04:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:51.632 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:51.632 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:51.632 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:51.632 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.632 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.632 04:54:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.539 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:53.539 00:17:53.539 real 0m10.992s 00:17:53.539 user 0m14.027s 00:17:53.539 sys 0m5.286s 00:17:53.539 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.539 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:53.539 ************************************ 00:17:53.539 END TEST nvmf_bdevio_no_huge 00:17:53.539 ************************************ 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:53.799 
************************************ 00:17:53.799 START TEST nvmf_tls 00:17:53.799 ************************************ 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:53.799 * Looking for test storage... 00:17:53.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:53.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.799 --rc genhtml_branch_coverage=1 00:17:53.799 --rc genhtml_function_coverage=1 00:17:53.799 --rc genhtml_legend=1 00:17:53.799 --rc geninfo_all_blocks=1 00:17:53.799 --rc geninfo_unexecuted_blocks=1 00:17:53.799 00:17:53.799 ' 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:53.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.799 --rc genhtml_branch_coverage=1 00:17:53.799 --rc genhtml_function_coverage=1 00:17:53.799 --rc genhtml_legend=1 00:17:53.799 --rc geninfo_all_blocks=1 00:17:53.799 --rc geninfo_unexecuted_blocks=1 00:17:53.799 00:17:53.799 ' 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:53.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.799 --rc genhtml_branch_coverage=1 00:17:53.799 --rc genhtml_function_coverage=1 00:17:53.799 --rc genhtml_legend=1 00:17:53.799 --rc geninfo_all_blocks=1 00:17:53.799 --rc geninfo_unexecuted_blocks=1 00:17:53.799 00:17:53.799 ' 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:53.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.799 --rc genhtml_branch_coverage=1 00:17:53.799 --rc genhtml_function_coverage=1 00:17:53.799 --rc genhtml_legend=1 00:17:53.799 --rc geninfo_all_blocks=1 00:17:53.799 --rc geninfo_unexecuted_blocks=1 00:17:53.799 00:17:53.799 ' 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.799 
04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.799 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:53.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.800 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.060 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:54.060 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:54.060 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:54.060 04:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.630 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:00.631 04:54:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:00.631 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:00.631 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:00.631 04:54:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:00.631 Found net devices under 0000:af:00.0: cvl_0_0 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:00.631 Found net devices under 0000:af:00.1: cvl_0_1 00:18:00.631 04:54:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:00.631 
04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:00.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:18:00.631 00:18:00.631 --- 10.0.0.2 ping statistics --- 00:18:00.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.631 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:00.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:18:00.631 00:18:00.631 --- 10.0.0.1 ping statistics --- 00:18:00.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.631 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.631 04:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.631 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=638374 00:18:00.631 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 638374 00:18:00.631 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:00.631 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 638374 ']' 00:18:00.631 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.631 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.631 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.631 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.631 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.631 [2024-12-10 04:54:51.055920] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:00.631 [2024-12-10 04:54:51.055962] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.631 [2024-12-10 04:54:51.131657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.632 [2024-12-10 04:54:51.169306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.632 [2024-12-10 04:54:51.169338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:00.632 [2024-12-10 04:54:51.169345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.632 [2024-12-10 04:54:51.169351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.632 [2024-12-10 04:54:51.169355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.632 [2024-12-10 04:54:51.169841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.632 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.632 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:00.632 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:00.632 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:00.632 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.632 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.632 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:00.632 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:00.632 true 00:18:00.632 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:00.632 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:00.632 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:00.632 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:00.632 
04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:00.891 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:00.891 04:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:00.891 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:00.891 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:00.891 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:01.150 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:01.150 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:01.409 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:01.409 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:01.409 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:01.409 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:01.667 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:01.667 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:01.667 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:18:01.667 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:01.667 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:01.926 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:01.926 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:01.926 04:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:02.185 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:02.185 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:02.444 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:02.444 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:02.445 04:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.E3v8zS9Dgy 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.z32TWrbelL 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.E3v8zS9Dgy 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.z32TWrbelL 00:18:02.445 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:02.704 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:02.963 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.E3v8zS9Dgy 00:18:02.963 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.E3v8zS9Dgy 00:18:02.963 04:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:02.963 [2024-12-10 04:54:54.048549] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.963 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:03.222 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:03.481 [2024-12-10 04:54:54.405445] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:03.481 [2024-12-10 04:54:54.405681] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.481 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:03.481 malloc0 00:18:03.740 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:03.740 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.E3v8zS9Dgy 00:18:04.044 04:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:04.301 04:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.E3v8zS9Dgy 00:18:14.288 Initializing NVMe Controllers 00:18:14.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:14.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:14.288 Initialization complete. Launching workers. 
00:18:14.288 ======================================================== 00:18:14.288 Latency(us) 00:18:14.288 Device Information : IOPS MiB/s Average min max 00:18:14.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16801.08 65.63 3809.39 811.56 5995.29 00:18:14.288 ======================================================== 00:18:14.288 Total : 16801.08 65.63 3809.39 811.56 5995.29 00:18:14.288 00:18:14.288 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.E3v8zS9Dgy 00:18:14.288 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:14.288 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:14.288 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:14.288 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.E3v8zS9Dgy 00:18:14.288 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.288 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=640656 00:18:14.288 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.288 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.288 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 640656 /var/tmp/bdevperf.sock 00:18:14.288 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 640656 ']' 00:18:14.288 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:14.288 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.288 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.288 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.288 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.288 [2024-12-10 04:55:05.360785] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:14.288 [2024-12-10 04:55:05.360835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid640656 ] 00:18:14.547 [2024-12-10 04:55:05.435129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.547 [2024-12-10 04:55:05.475798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.547 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.547 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:14.547 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.E3v8zS9Dgy 00:18:14.806 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:14.806 [2024-12-10 04:55:05.912424] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.065 TLSTESTn1 00:18:15.065 04:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:15.065 Running I/O for 10 seconds... 00:18:17.380 5500.00 IOPS, 21.48 MiB/s [2024-12-10T03:55:09.454Z] 5496.00 IOPS, 21.47 MiB/s [2024-12-10T03:55:10.391Z] 5457.67 IOPS, 21.32 MiB/s [2024-12-10T03:55:11.327Z] 5478.00 IOPS, 21.40 MiB/s [2024-12-10T03:55:12.264Z] 5495.20 IOPS, 21.47 MiB/s [2024-12-10T03:55:13.200Z] 5516.17 IOPS, 21.55 MiB/s [2024-12-10T03:55:14.136Z] 5502.86 IOPS, 21.50 MiB/s [2024-12-10T03:55:15.214Z] 5530.88 IOPS, 21.60 MiB/s [2024-12-10T03:55:16.150Z] 5529.56 IOPS, 21.60 MiB/s [2024-12-10T03:55:16.150Z] 5526.00 IOPS, 21.59 MiB/s 00:18:25.013 Latency(us) 00:18:25.013 [2024-12-10T03:55:16.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.013 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:25.013 Verification LBA range: start 0x0 length 0x2000 00:18:25.013 TLSTESTn1 : 10.02 5529.18 21.60 0.00 0.00 23112.77 7521.04 23842.62 00:18:25.013 [2024-12-10T03:55:16.150Z] =================================================================================================================== 00:18:25.013 [2024-12-10T03:55:16.150Z] Total : 5529.18 21.60 0.00 0.00 23112.77 7521.04 23842.62 00:18:25.013 { 00:18:25.013 "results": [ 00:18:25.013 { 00:18:25.013 "job": "TLSTESTn1", 00:18:25.013 "core_mask": "0x4", 00:18:25.013 "workload": "verify", 00:18:25.013 "status": "finished", 00:18:25.013 "verify_range": { 00:18:25.013 "start": 0, 00:18:25.013 "length": 8192 00:18:25.013 }, 00:18:25.013 "queue_depth": 128, 00:18:25.013 "io_size": 4096, 00:18:25.013 "runtime": 10.017212, 00:18:25.013 "iops": 
5529.18316992792, 00:18:25.013 "mibps": 21.598371757530938, 00:18:25.013 "io_failed": 0, 00:18:25.013 "io_timeout": 0, 00:18:25.013 "avg_latency_us": 23112.765700048232, 00:18:25.013 "min_latency_us": 7521.03619047619, 00:18:25.013 "max_latency_us": 23842.620952380952 00:18:25.013 } 00:18:25.013 ], 00:18:25.013 "core_count": 1 00:18:25.013 } 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 640656 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 640656 ']' 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 640656 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 640656 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 640656' 00:18:25.273 killing process with pid 640656 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 640656 00:18:25.273 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.273 00:18:25.273 Latency(us) 00:18:25.273 [2024-12-10T03:55:16.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.273 [2024-12-10T03:55:16.410Z] 
=================================================================================================================== 00:18:25.273 [2024-12-10T03:55:16.410Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 640656 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z32TWrbelL 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z32TWrbelL 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z32TWrbelL 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.z32TWrbelL 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=642446 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 642446 /var/tmp/bdevperf.sock 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 642446 ']' 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:25.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.273 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.533 [2024-12-10 04:55:16.410011] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:18:25.533 [2024-12-10 04:55:16.410057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid642446 ] 00:18:25.533 [2024-12-10 04:55:16.483050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.533 [2024-12-10 04:55:16.521594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.533 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.533 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:25.533 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z32TWrbelL 00:18:25.792 04:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:26.051 [2024-12-10 04:55:16.989772] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:26.051 [2024-12-10 04:55:17.000604] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:26.051 [2024-12-10 04:55:17.000977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15603a0 (107): Transport endpoint is not connected 00:18:26.051 [2024-12-10 04:55:17.001971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15603a0 (9): Bad file descriptor 00:18:26.051 
[2024-12-10 04:55:17.002973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:26.051 [2024-12-10 04:55:17.002985] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:26.051 [2024-12-10 04:55:17.002992] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:26.051 [2024-12-10 04:55:17.003005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:26.051 request: 00:18:26.051 { 00:18:26.051 "name": "TLSTEST", 00:18:26.051 "trtype": "tcp", 00:18:26.051 "traddr": "10.0.0.2", 00:18:26.051 "adrfam": "ipv4", 00:18:26.051 "trsvcid": "4420", 00:18:26.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.051 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:26.051 "prchk_reftag": false, 00:18:26.051 "prchk_guard": false, 00:18:26.051 "hdgst": false, 00:18:26.051 "ddgst": false, 00:18:26.051 "psk": "key0", 00:18:26.051 "allow_unrecognized_csi": false, 00:18:26.051 "method": "bdev_nvme_attach_controller", 00:18:26.051 "req_id": 1 00:18:26.051 } 00:18:26.051 Got JSON-RPC error response 00:18:26.051 response: 00:18:26.051 { 00:18:26.051 "code": -5, 00:18:26.051 "message": "Input/output error" 00:18:26.051 } 00:18:26.051 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 642446 00:18:26.051 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 642446 ']' 00:18:26.051 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 642446 00:18:26.051 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:26.051 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.051 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 642446 00:18:26.051 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:26.051 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:26.051 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 642446' 00:18:26.051 killing process with pid 642446 00:18:26.051 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 642446 00:18:26.051 Received shutdown signal, test time was about 10.000000 seconds 00:18:26.051 00:18:26.051 Latency(us) 00:18:26.051 [2024-12-10T03:55:17.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.051 [2024-12-10T03:55:17.188Z] =================================================================================================================== 00:18:26.051 [2024-12-10T03:55:17.188Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:26.051 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 642446 00:18:26.310 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:26.310 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:26.310 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.310 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.310 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.310 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.E3v8zS9Dgy 00:18:26.310 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:18:26.310 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.E3v8zS9Dgy 00:18:26.310 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:26.310 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.310 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:26.310 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.310 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.E3v8zS9Dgy 00:18:26.310 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:26.310 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:26.310 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:26.310 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.E3v8zS9Dgy 00:18:26.311 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:26.311 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=642670 00:18:26.311 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:26.311 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:26.311 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 642670 
/var/tmp/bdevperf.sock 00:18:26.311 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 642670 ']' 00:18:26.311 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:26.311 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.311 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:26.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:26.311 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.311 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.311 [2024-12-10 04:55:17.268645] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:18:26.311 [2024-12-10 04:55:17.268692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid642670 ] 00:18:26.311 [2024-12-10 04:55:17.342633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.311 [2024-12-10 04:55:17.384065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.570 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.570 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:26.570 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.E3v8zS9Dgy 00:18:26.570 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:26.829 [2024-12-10 04:55:17.827613] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:26.829 [2024-12-10 04:55:17.832617] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:26.829 [2024-12-10 04:55:17.832639] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:26.829 [2024-12-10 04:55:17.832662] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:26.829 [2024-12-10 04:55:17.832874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12163a0 (107): Transport endpoint is not connected 00:18:26.829 [2024-12-10 04:55:17.833867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12163a0 (9): Bad file descriptor 00:18:26.829 [2024-12-10 04:55:17.834868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:26.829 [2024-12-10 04:55:17.834877] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:26.829 [2024-12-10 04:55:17.834887] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:26.829 [2024-12-10 04:55:17.834897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:26.829 request: 00:18:26.829 { 00:18:26.829 "name": "TLSTEST", 00:18:26.829 "trtype": "tcp", 00:18:26.829 "traddr": "10.0.0.2", 00:18:26.829 "adrfam": "ipv4", 00:18:26.829 "trsvcid": "4420", 00:18:26.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.829 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:26.829 "prchk_reftag": false, 00:18:26.829 "prchk_guard": false, 00:18:26.829 "hdgst": false, 00:18:26.829 "ddgst": false, 00:18:26.829 "psk": "key0", 00:18:26.829 "allow_unrecognized_csi": false, 00:18:26.829 "method": "bdev_nvme_attach_controller", 00:18:26.829 "req_id": 1 00:18:26.829 } 00:18:26.829 Got JSON-RPC error response 00:18:26.829 response: 00:18:26.829 { 00:18:26.829 "code": -5, 00:18:26.829 "message": "Input/output error" 00:18:26.829 } 00:18:26.829 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 642670 00:18:26.829 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 642670 ']' 00:18:26.829 04:55:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 642670 00:18:26.829 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:26.829 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.829 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 642670 00:18:26.829 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:26.829 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:26.829 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 642670' 00:18:26.829 killing process with pid 642670 00:18:26.829 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 642670 00:18:26.829 Received shutdown signal, test time was about 10.000000 seconds 00:18:26.829 00:18:26.829 Latency(us) 00:18:26.829 [2024-12-10T03:55:17.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.829 [2024-12-10T03:55:17.966Z] =================================================================================================================== 00:18:26.829 [2024-12-10T03:55:17.966Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:26.829 04:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 642670 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.089 04:55:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.E3v8zS9Dgy 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.E3v8zS9Dgy 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.E3v8zS9Dgy 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.E3v8zS9Dgy 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=642694 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 642694 /var/tmp/bdevperf.sock 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 642694 ']' 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.089 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.089 [2024-12-10 04:55:18.096901] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:18:27.089 [2024-12-10 04:55:18.096947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid642694 ] 00:18:27.089 [2024-12-10 04:55:18.163106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.089 [2024-12-10 04:55:18.201916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.347 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.347 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:27.347 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.E3v8zS9Dgy 00:18:27.605 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.605 [2024-12-10 04:55:18.670174] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:27.605 [2024-12-10 04:55:18.674775] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:27.605 [2024-12-10 04:55:18.674797] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:27.605 [2024-12-10 04:55:18.674821] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:27.605 [2024-12-10 04:55:18.675447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd23a0 (107): Transport endpoint is not connected 00:18:27.605 [2024-12-10 04:55:18.676438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd23a0 (9): Bad file descriptor 00:18:27.605 [2024-12-10 04:55:18.677440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:27.605 [2024-12-10 04:55:18.677451] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:27.605 [2024-12-10 04:55:18.677459] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:27.605 [2024-12-10 04:55:18.677471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:27.605 request: 00:18:27.605 { 00:18:27.605 "name": "TLSTEST", 00:18:27.605 "trtype": "tcp", 00:18:27.605 "traddr": "10.0.0.2", 00:18:27.605 "adrfam": "ipv4", 00:18:27.605 "trsvcid": "4420", 00:18:27.605 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:27.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:27.605 "prchk_reftag": false, 00:18:27.605 "prchk_guard": false, 00:18:27.605 "hdgst": false, 00:18:27.605 "ddgst": false, 00:18:27.605 "psk": "key0", 00:18:27.605 "allow_unrecognized_csi": false, 00:18:27.605 "method": "bdev_nvme_attach_controller", 00:18:27.605 "req_id": 1 00:18:27.605 } 00:18:27.605 Got JSON-RPC error response 00:18:27.605 response: 00:18:27.605 { 00:18:27.605 "code": -5, 00:18:27.605 "message": "Input/output error" 00:18:27.605 } 00:18:27.605 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 642694 00:18:27.605 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 642694 ']' 00:18:27.605 04:55:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 642694 00:18:27.605 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.605 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.605 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 642694 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 642694' 00:18:27.864 killing process with pid 642694 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 642694 00:18:27.864 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.864 00:18:27.864 Latency(us) 00:18:27.864 [2024-12-10T03:55:19.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.864 [2024-12-10T03:55:19.001Z] =================================================================================================================== 00:18:27.864 [2024-12-10T03:55:19.001Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 642694 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.864 04:55:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=642919 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:27.864 04:55:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 642919 /var/tmp/bdevperf.sock 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 642919 ']' 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.864 04:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.864 [2024-12-10 04:55:18.947021] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:18:27.864 [2024-12-10 04:55:18.947068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid642919 ] 00:18:28.123 [2024-12-10 04:55:19.017800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.123 [2024-12-10 04:55:19.054102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.123 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.123 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:28.123 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:28.382 [2024-12-10 04:55:19.312941] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:28.382 [2024-12-10 04:55:19.312972] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:28.382 request: 00:18:28.382 { 00:18:28.382 "name": "key0", 00:18:28.382 "path": "", 00:18:28.382 "method": "keyring_file_add_key", 00:18:28.382 "req_id": 1 00:18:28.382 } 00:18:28.382 Got JSON-RPC error response 00:18:28.382 response: 00:18:28.382 { 00:18:28.382 "code": -1, 00:18:28.382 "message": "Operation not permitted" 00:18:28.382 } 00:18:28.382 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:28.382 [2024-12-10 04:55:19.505526] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:28.382 [2024-12-10 04:55:19.505550] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:28.382 request: 00:18:28.382 { 00:18:28.382 "name": "TLSTEST", 00:18:28.382 "trtype": "tcp", 00:18:28.382 "traddr": "10.0.0.2", 00:18:28.382 "adrfam": "ipv4", 00:18:28.382 "trsvcid": "4420", 00:18:28.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.382 "prchk_reftag": false, 00:18:28.382 "prchk_guard": false, 00:18:28.382 "hdgst": false, 00:18:28.382 "ddgst": false, 00:18:28.382 "psk": "key0", 00:18:28.382 "allow_unrecognized_csi": false, 00:18:28.382 "method": "bdev_nvme_attach_controller", 00:18:28.382 "req_id": 1 00:18:28.382 } 00:18:28.382 Got JSON-RPC error response 00:18:28.382 response: 00:18:28.382 { 00:18:28.382 "code": -126, 00:18:28.382 "message": "Required key not available" 00:18:28.382 } 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 642919 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 642919 ']' 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 642919 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 642919 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 642919' 00:18:28.641 killing process with pid 642919 00:18:28.641 
04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 642919 00:18:28.641 Received shutdown signal, test time was about 10.000000 seconds 00:18:28.641 00:18:28.641 Latency(us) 00:18:28.641 [2024-12-10T03:55:19.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.641 [2024-12-10T03:55:19.778Z] =================================================================================================================== 00:18:28.641 [2024-12-10T03:55:19.778Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 642919 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 638374 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 638374 ']' 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 638374 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.641 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 638374 00:18:28.900 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:18:28.900 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:28.900 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 638374' 00:18:28.900 killing process with pid 638374 00:18:28.900 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 638374 00:18:28.900 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 638374 00:18:28.900 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:28.900 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:28.900 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:28.900 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:28.900 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:28.900 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:28.900 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:28.900 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:28.900 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:28.900 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Yw4nntCAEq 00:18:28.900 04:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:28.900 04:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Yw4nntCAEq 00:18:28.900 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:28.900 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.900 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.900 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.900 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=643160 00:18:28.900 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 643160 00:18:28.900 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:28.900 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 643160 ']' 00:18:28.900 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.900 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.900 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.900 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.900 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.159 [2024-12-10 04:55:20.053583] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:18:29.159 [2024-12-10 04:55:20.053633] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.159 [2024-12-10 04:55:20.131043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.159 [2024-12-10 04:55:20.168613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.159 [2024-12-10 04:55:20.168648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.159 [2024-12-10 04:55:20.168654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.159 [2024-12-10 04:55:20.168662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.159 [2024-12-10 04:55:20.168668] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:29.159 [2024-12-10 04:55:20.169177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.159 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.159 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.159 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.159 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.159 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.418 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.418 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Yw4nntCAEq 00:18:29.418 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Yw4nntCAEq 00:18:29.418 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:29.418 [2024-12-10 04:55:20.477447] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.418 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:29.676 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:29.934 [2024-12-10 04:55:20.878469] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:29.934 [2024-12-10 04:55:20.878682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:29.934 04:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:30.193 malloc0 00:18:30.193 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:30.193 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Yw4nntCAEq 00:18:30.452 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.711 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Yw4nntCAEq 00:18:30.711 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:30.711 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:30.711 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:30.711 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Yw4nntCAEq 00:18:30.711 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:30.711 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=643408 00:18:30.711 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:30.711 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:30.711 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 643408 /var/tmp/bdevperf.sock 00:18:30.711 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 643408 ']' 00:18:30.711 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.711 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.711 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.711 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.711 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.711 [2024-12-10 04:55:21.735372] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:18:30.711 [2024-12-10 04:55:21.735421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid643408 ] 00:18:30.711 [2024-12-10 04:55:21.811650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.970 [2024-12-10 04:55:21.851562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.970 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.970 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:30.970 04:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Yw4nntCAEq 00:18:31.228 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:31.228 [2024-12-10 04:55:22.307841] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:31.487 TLSTESTn1 00:18:31.487 04:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:31.487 Running I/O for 10 seconds... 
00:18:33.800 5519.00 IOPS, 21.56 MiB/s [2024-12-10T03:55:25.503Z] 5561.50 IOPS, 21.72 MiB/s [2024-12-10T03:55:26.879Z] 5589.67 IOPS, 21.83 MiB/s [2024-12-10T03:55:27.814Z] 5602.25 IOPS, 21.88 MiB/s [2024-12-10T03:55:28.750Z] 5568.00 IOPS, 21.75 MiB/s [2024-12-10T03:55:29.686Z] 5570.83 IOPS, 21.76 MiB/s [2024-12-10T03:55:30.622Z] 5556.86 IOPS, 21.71 MiB/s [2024-12-10T03:55:31.558Z] 5538.12 IOPS, 21.63 MiB/s [2024-12-10T03:55:32.936Z] 5552.11 IOPS, 21.69 MiB/s [2024-12-10T03:55:32.936Z] 5556.70 IOPS, 21.71 MiB/s 00:18:41.799 Latency(us) 00:18:41.799 [2024-12-10T03:55:32.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.799 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:41.799 Verification LBA range: start 0x0 length 0x2000 00:18:41.799 TLSTESTn1 : 10.01 5562.50 21.73 0.00 0.00 22977.86 4774.77 25465.42 00:18:41.799 [2024-12-10T03:55:32.936Z] =================================================================================================================== 00:18:41.799 [2024-12-10T03:55:32.936Z] Total : 5562.50 21.73 0.00 0.00 22977.86 4774.77 25465.42 00:18:41.799 { 00:18:41.799 "results": [ 00:18:41.799 { 00:18:41.799 "job": "TLSTESTn1", 00:18:41.799 "core_mask": "0x4", 00:18:41.799 "workload": "verify", 00:18:41.799 "status": "finished", 00:18:41.799 "verify_range": { 00:18:41.799 "start": 0, 00:18:41.799 "length": 8192 00:18:41.799 }, 00:18:41.799 "queue_depth": 128, 00:18:41.799 "io_size": 4096, 00:18:41.799 "runtime": 10.012225, 00:18:41.799 "iops": 5562.499843940783, 00:18:41.799 "mibps": 21.728515015393683, 00:18:41.799 "io_failed": 0, 00:18:41.799 "io_timeout": 0, 00:18:41.799 "avg_latency_us": 22977.85812555737, 00:18:41.799 "min_latency_us": 4774.765714285714, 00:18:41.799 "max_latency_us": 25465.417142857143 00:18:41.799 } 00:18:41.799 ], 00:18:41.799 "core_count": 1 00:18:41.799 } 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 643408 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 643408 ']' 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 643408 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 643408 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 643408' 00:18:41.799 killing process with pid 643408 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 643408 00:18:41.799 Received shutdown signal, test time was about 10.000000 seconds 00:18:41.799 00:18:41.799 Latency(us) 00:18:41.799 [2024-12-10T03:55:32.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.799 [2024-12-10T03:55:32.936Z] =================================================================================================================== 00:18:41.799 [2024-12-10T03:55:32.936Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 643408 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Yw4nntCAEq 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Yw4nntCAEq 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Yw4nntCAEq 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Yw4nntCAEq 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Yw4nntCAEq 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=645194 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 645194 /var/tmp/bdevperf.sock 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 645194 ']' 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.799 04:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.799 [2024-12-10 04:55:32.814722] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:18:41.799 [2024-12-10 04:55:32.814771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid645194 ] 00:18:41.799 [2024-12-10 04:55:32.880664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.799 [2024-12-10 04:55:32.916889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.058 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.058 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:42.058 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Yw4nntCAEq 00:18:42.058 [2024-12-10 04:55:33.184574] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Yw4nntCAEq': 0100666 00:18:42.058 [2024-12-10 04:55:33.184606] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:42.058 request: 00:18:42.058 { 00:18:42.058 "name": "key0", 00:18:42.058 "path": "/tmp/tmp.Yw4nntCAEq", 00:18:42.058 "method": "keyring_file_add_key", 00:18:42.058 "req_id": 1 00:18:42.058 } 00:18:42.058 Got JSON-RPC error response 00:18:42.058 response: 00:18:42.058 { 00:18:42.058 "code": -1, 00:18:42.058 "message": "Operation not permitted" 00:18:42.058 } 00:18:42.318 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:42.318 [2024-12-10 04:55:33.381160] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:42.318 [2024-12-10 04:55:33.381186] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:42.318 request: 00:18:42.318 { 00:18:42.318 "name": "TLSTEST", 00:18:42.318 "trtype": "tcp", 00:18:42.318 "traddr": "10.0.0.2", 00:18:42.318 "adrfam": "ipv4", 00:18:42.318 "trsvcid": "4420", 00:18:42.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.318 "prchk_reftag": false, 00:18:42.318 "prchk_guard": false, 00:18:42.318 "hdgst": false, 00:18:42.318 "ddgst": false, 00:18:42.318 "psk": "key0", 00:18:42.318 "allow_unrecognized_csi": false, 00:18:42.318 "method": "bdev_nvme_attach_controller", 00:18:42.318 "req_id": 1 00:18:42.318 } 00:18:42.318 Got JSON-RPC error response 00:18:42.318 response: 00:18:42.318 { 00:18:42.318 "code": -126, 00:18:42.318 "message": "Required key not available" 00:18:42.318 } 00:18:42.318 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 645194 00:18:42.318 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 645194 ']' 00:18:42.318 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 645194 00:18:42.318 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:42.318 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.318 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 645194 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 645194' 00:18:42.577 killing process with pid 645194 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 645194 00:18:42.577 Received shutdown signal, test time was about 10.000000 seconds 00:18:42.577 00:18:42.577 Latency(us) 00:18:42.577 [2024-12-10T03:55:33.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.577 [2024-12-10T03:55:33.714Z] =================================================================================================================== 00:18:42.577 [2024-12-10T03:55:33.714Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 645194 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 643160 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 643160 ']' 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 643160 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 643160 00:18:42.577 04:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 643160' 00:18:42.577 killing process with pid 643160 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 643160 00:18:42.577 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 643160 00:18:42.836 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:42.836 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:42.836 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.836 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.836 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=645397 00:18:42.836 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:42.836 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 645397 00:18:42.836 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 645397 ']' 00:18:42.836 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.836 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.836 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:42.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.836 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.836 04:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.836 [2024-12-10 04:55:33.885346] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:42.836 [2024-12-10 04:55:33.885394] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.836 [2024-12-10 04:55:33.962615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.095 [2024-12-10 04:55:33.998659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.095 [2024-12-10 04:55:33.998693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.095 [2024-12-10 04:55:33.998701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.095 [2024-12-10 04:55:33.998708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.095 [2024-12-10 04:55:33.998717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:43.095 [2024-12-10 04:55:33.999221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.095 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.095 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:43.095 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:43.095 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:43.095 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.095 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.095 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Yw4nntCAEq 00:18:43.095 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:43.095 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Yw4nntCAEq 00:18:43.095 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:43.095 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.095 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:43.095 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.095 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.Yw4nntCAEq 00:18:43.095 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Yw4nntCAEq 00:18:43.095 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:43.354 [2024-12-10 04:55:34.305951] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.354 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:43.613 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:43.613 [2024-12-10 04:55:34.698953] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:43.613 [2024-12-10 04:55:34.699175] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.613 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:43.871 malloc0 00:18:43.871 04:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:44.130 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Yw4nntCAEq 00:18:44.387 [2024-12-10 04:55:35.312459] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Yw4nntCAEq': 0100666 00:18:44.387 [2024-12-10 04:55:35.312483] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:44.387 request: 00:18:44.387 { 00:18:44.387 "name": "key0", 00:18:44.387 "path": "/tmp/tmp.Yw4nntCAEq", 00:18:44.387 "method": "keyring_file_add_key", 00:18:44.387 "req_id": 1 
00:18:44.387 } 00:18:44.387 Got JSON-RPC error response 00:18:44.387 response: 00:18:44.387 { 00:18:44.387 "code": -1, 00:18:44.387 "message": "Operation not permitted" 00:18:44.387 } 00:18:44.387 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:44.387 [2024-12-10 04:55:35.504978] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:44.387 [2024-12-10 04:55:35.505010] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:44.387 request: 00:18:44.387 { 00:18:44.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.387 "host": "nqn.2016-06.io.spdk:host1", 00:18:44.387 "psk": "key0", 00:18:44.387 "method": "nvmf_subsystem_add_host", 00:18:44.387 "req_id": 1 00:18:44.387 } 00:18:44.387 Got JSON-RPC error response 00:18:44.387 response: 00:18:44.387 { 00:18:44.387 "code": -32603, 00:18:44.387 "message": "Internal error" 00:18:44.387 } 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 645397 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 645397 ']' 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 645397 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:44.646 04:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 645397 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 645397' 00:18:44.646 killing process with pid 645397 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 645397 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 645397 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Yw4nntCAEq 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=645694 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 645694 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 645694 ']' 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.646 04:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.905 [2024-12-10 04:55:35.808381] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:44.905 [2024-12-10 04:55:35.808427] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.905 [2024-12-10 04:55:35.885230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.905 [2024-12-10 04:55:35.923423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.905 [2024-12-10 04:55:35.923456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.905 [2024-12-10 04:55:35.923464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.905 [2024-12-10 04:55:35.923469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.905 [2024-12-10 04:55:35.923474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:44.905 [2024-12-10 04:55:35.923976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.905 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.905 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:44.905 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:44.905 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:44.905 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.164 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.164 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Yw4nntCAEq 00:18:45.164 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Yw4nntCAEq 00:18:45.164 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:45.164 [2024-12-10 04:55:36.218668] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.164 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:45.423 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:45.682 [2024-12-10 04:55:36.627726] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:45.682 [2024-12-10 04:55:36.627941] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:45.682 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:45.940 malloc0 00:18:45.940 04:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:45.940 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Yw4nntCAEq 00:18:46.199 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:46.458 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:46.458 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=645946 00:18:46.458 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:46.458 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 645946 /var/tmp/bdevperf.sock 00:18:46.458 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 645946 ']' 00:18:46.458 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.458 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.458 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:18:46.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.458 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.458 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.458 [2024-12-10 04:55:37.480424] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:46.458 [2024-12-10 04:55:37.480469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid645946 ] 00:18:46.458 [2024-12-10 04:55:37.554339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.717 [2024-12-10 04:55:37.593982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.717 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.717 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:46.717 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Yw4nntCAEq 00:18:46.976 04:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:46.976 [2024-12-10 04:55:38.082074] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:47.234 TLSTESTn1 00:18:47.234 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:47.493 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:47.493 "subsystems": [ 00:18:47.493 { 00:18:47.493 "subsystem": "keyring", 00:18:47.493 "config": [ 00:18:47.493 { 00:18:47.493 "method": "keyring_file_add_key", 00:18:47.493 "params": { 00:18:47.493 "name": "key0", 00:18:47.493 "path": "/tmp/tmp.Yw4nntCAEq" 00:18:47.493 } 00:18:47.493 } 00:18:47.493 ] 00:18:47.493 }, 00:18:47.493 { 00:18:47.493 "subsystem": "iobuf", 00:18:47.493 "config": [ 00:18:47.493 { 00:18:47.493 "method": "iobuf_set_options", 00:18:47.493 "params": { 00:18:47.493 "small_pool_count": 8192, 00:18:47.493 "large_pool_count": 1024, 00:18:47.494 "small_bufsize": 8192, 00:18:47.494 "large_bufsize": 135168, 00:18:47.494 "enable_numa": false 00:18:47.494 } 00:18:47.494 } 00:18:47.494 ] 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "subsystem": "sock", 00:18:47.494 "config": [ 00:18:47.494 { 00:18:47.494 "method": "sock_set_default_impl", 00:18:47.494 "params": { 00:18:47.494 "impl_name": "posix" 00:18:47.494 } 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "method": "sock_impl_set_options", 00:18:47.494 "params": { 00:18:47.494 "impl_name": "ssl", 00:18:47.494 "recv_buf_size": 4096, 00:18:47.494 "send_buf_size": 4096, 00:18:47.494 "enable_recv_pipe": true, 00:18:47.494 "enable_quickack": false, 00:18:47.494 "enable_placement_id": 0, 00:18:47.494 "enable_zerocopy_send_server": true, 00:18:47.494 "enable_zerocopy_send_client": false, 00:18:47.494 "zerocopy_threshold": 0, 00:18:47.494 "tls_version": 0, 00:18:47.494 "enable_ktls": false 00:18:47.494 } 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "method": "sock_impl_set_options", 00:18:47.494 "params": { 00:18:47.494 "impl_name": "posix", 00:18:47.494 "recv_buf_size": 2097152, 00:18:47.494 "send_buf_size": 2097152, 00:18:47.494 "enable_recv_pipe": true, 00:18:47.494 "enable_quickack": false, 00:18:47.494 "enable_placement_id": 0, 
00:18:47.494 "enable_zerocopy_send_server": true, 00:18:47.494 "enable_zerocopy_send_client": false, 00:18:47.494 "zerocopy_threshold": 0, 00:18:47.494 "tls_version": 0, 00:18:47.494 "enable_ktls": false 00:18:47.494 } 00:18:47.494 } 00:18:47.494 ] 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "subsystem": "vmd", 00:18:47.494 "config": [] 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "subsystem": "accel", 00:18:47.494 "config": [ 00:18:47.494 { 00:18:47.494 "method": "accel_set_options", 00:18:47.494 "params": { 00:18:47.494 "small_cache_size": 128, 00:18:47.494 "large_cache_size": 16, 00:18:47.494 "task_count": 2048, 00:18:47.494 "sequence_count": 2048, 00:18:47.494 "buf_count": 2048 00:18:47.494 } 00:18:47.494 } 00:18:47.494 ] 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "subsystem": "bdev", 00:18:47.494 "config": [ 00:18:47.494 { 00:18:47.494 "method": "bdev_set_options", 00:18:47.494 "params": { 00:18:47.494 "bdev_io_pool_size": 65535, 00:18:47.494 "bdev_io_cache_size": 256, 00:18:47.494 "bdev_auto_examine": true, 00:18:47.494 "iobuf_small_cache_size": 128, 00:18:47.494 "iobuf_large_cache_size": 16 00:18:47.494 } 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "method": "bdev_raid_set_options", 00:18:47.494 "params": { 00:18:47.494 "process_window_size_kb": 1024, 00:18:47.494 "process_max_bandwidth_mb_sec": 0 00:18:47.494 } 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "method": "bdev_iscsi_set_options", 00:18:47.494 "params": { 00:18:47.494 "timeout_sec": 30 00:18:47.494 } 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "method": "bdev_nvme_set_options", 00:18:47.494 "params": { 00:18:47.494 "action_on_timeout": "none", 00:18:47.494 "timeout_us": 0, 00:18:47.494 "timeout_admin_us": 0, 00:18:47.494 "keep_alive_timeout_ms": 10000, 00:18:47.494 "arbitration_burst": 0, 00:18:47.494 "low_priority_weight": 0, 00:18:47.494 "medium_priority_weight": 0, 00:18:47.494 "high_priority_weight": 0, 00:18:47.494 "nvme_adminq_poll_period_us": 10000, 00:18:47.494 "nvme_ioq_poll_period_us": 0, 
00:18:47.494 "io_queue_requests": 0, 00:18:47.494 "delay_cmd_submit": true, 00:18:47.494 "transport_retry_count": 4, 00:18:47.494 "bdev_retry_count": 3, 00:18:47.494 "transport_ack_timeout": 0, 00:18:47.494 "ctrlr_loss_timeout_sec": 0, 00:18:47.494 "reconnect_delay_sec": 0, 00:18:47.494 "fast_io_fail_timeout_sec": 0, 00:18:47.494 "disable_auto_failback": false, 00:18:47.494 "generate_uuids": false, 00:18:47.494 "transport_tos": 0, 00:18:47.494 "nvme_error_stat": false, 00:18:47.494 "rdma_srq_size": 0, 00:18:47.494 "io_path_stat": false, 00:18:47.494 "allow_accel_sequence": false, 00:18:47.494 "rdma_max_cq_size": 0, 00:18:47.494 "rdma_cm_event_timeout_ms": 0, 00:18:47.494 "dhchap_digests": [ 00:18:47.494 "sha256", 00:18:47.494 "sha384", 00:18:47.494 "sha512" 00:18:47.494 ], 00:18:47.494 "dhchap_dhgroups": [ 00:18:47.494 "null", 00:18:47.494 "ffdhe2048", 00:18:47.494 "ffdhe3072", 00:18:47.494 "ffdhe4096", 00:18:47.494 "ffdhe6144", 00:18:47.494 "ffdhe8192" 00:18:47.494 ] 00:18:47.494 } 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "method": "bdev_nvme_set_hotplug", 00:18:47.494 "params": { 00:18:47.494 "period_us": 100000, 00:18:47.494 "enable": false 00:18:47.494 } 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "method": "bdev_malloc_create", 00:18:47.494 "params": { 00:18:47.494 "name": "malloc0", 00:18:47.494 "num_blocks": 8192, 00:18:47.494 "block_size": 4096, 00:18:47.494 "physical_block_size": 4096, 00:18:47.494 "uuid": "1f529d02-5ab7-4bba-9865-9945f11c954f", 00:18:47.494 "optimal_io_boundary": 0, 00:18:47.494 "md_size": 0, 00:18:47.494 "dif_type": 0, 00:18:47.494 "dif_is_head_of_md": false, 00:18:47.494 "dif_pi_format": 0 00:18:47.494 } 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "method": "bdev_wait_for_examine" 00:18:47.494 } 00:18:47.494 ] 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "subsystem": "nbd", 00:18:47.494 "config": [] 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "subsystem": "scheduler", 00:18:47.494 "config": [ 00:18:47.494 { 00:18:47.494 "method": 
"framework_set_scheduler", 00:18:47.494 "params": { 00:18:47.494 "name": "static" 00:18:47.494 } 00:18:47.494 } 00:18:47.494 ] 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "subsystem": "nvmf", 00:18:47.494 "config": [ 00:18:47.494 { 00:18:47.494 "method": "nvmf_set_config", 00:18:47.494 "params": { 00:18:47.494 "discovery_filter": "match_any", 00:18:47.494 "admin_cmd_passthru": { 00:18:47.494 "identify_ctrlr": false 00:18:47.494 }, 00:18:47.494 "dhchap_digests": [ 00:18:47.494 "sha256", 00:18:47.494 "sha384", 00:18:47.494 "sha512" 00:18:47.494 ], 00:18:47.494 "dhchap_dhgroups": [ 00:18:47.494 "null", 00:18:47.494 "ffdhe2048", 00:18:47.494 "ffdhe3072", 00:18:47.494 "ffdhe4096", 00:18:47.494 "ffdhe6144", 00:18:47.494 "ffdhe8192" 00:18:47.494 ] 00:18:47.494 } 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "method": "nvmf_set_max_subsystems", 00:18:47.494 "params": { 00:18:47.494 "max_subsystems": 1024 00:18:47.494 } 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "method": "nvmf_set_crdt", 00:18:47.494 "params": { 00:18:47.494 "crdt1": 0, 00:18:47.494 "crdt2": 0, 00:18:47.494 "crdt3": 0 00:18:47.494 } 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "method": "nvmf_create_transport", 00:18:47.494 "params": { 00:18:47.494 "trtype": "TCP", 00:18:47.494 "max_queue_depth": 128, 00:18:47.494 "max_io_qpairs_per_ctrlr": 127, 00:18:47.494 "in_capsule_data_size": 4096, 00:18:47.494 "max_io_size": 131072, 00:18:47.494 "io_unit_size": 131072, 00:18:47.494 "max_aq_depth": 128, 00:18:47.494 "num_shared_buffers": 511, 00:18:47.494 "buf_cache_size": 4294967295, 00:18:47.494 "dif_insert_or_strip": false, 00:18:47.494 "zcopy": false, 00:18:47.494 "c2h_success": false, 00:18:47.494 "sock_priority": 0, 00:18:47.494 "abort_timeout_sec": 1, 00:18:47.494 "ack_timeout": 0, 00:18:47.494 "data_wr_pool_size": 0 00:18:47.494 } 00:18:47.494 }, 00:18:47.494 { 00:18:47.494 "method": "nvmf_create_subsystem", 00:18:47.494 "params": { 00:18:47.494 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.494 
"allow_any_host": false, 00:18:47.494 "serial_number": "SPDK00000000000001", 00:18:47.495 "model_number": "SPDK bdev Controller", 00:18:47.495 "max_namespaces": 10, 00:18:47.495 "min_cntlid": 1, 00:18:47.495 "max_cntlid": 65519, 00:18:47.495 "ana_reporting": false 00:18:47.495 } 00:18:47.495 }, 00:18:47.495 { 00:18:47.495 "method": "nvmf_subsystem_add_host", 00:18:47.495 "params": { 00:18:47.495 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.495 "host": "nqn.2016-06.io.spdk:host1", 00:18:47.495 "psk": "key0" 00:18:47.495 } 00:18:47.495 }, 00:18:47.495 { 00:18:47.495 "method": "nvmf_subsystem_add_ns", 00:18:47.495 "params": { 00:18:47.495 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.495 "namespace": { 00:18:47.495 "nsid": 1, 00:18:47.495 "bdev_name": "malloc0", 00:18:47.495 "nguid": "1F529D025AB74BBA98659945F11C954F", 00:18:47.495 "uuid": "1f529d02-5ab7-4bba-9865-9945f11c954f", 00:18:47.495 "no_auto_visible": false 00:18:47.495 } 00:18:47.495 } 00:18:47.495 }, 00:18:47.495 { 00:18:47.495 "method": "nvmf_subsystem_add_listener", 00:18:47.495 "params": { 00:18:47.495 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.495 "listen_address": { 00:18:47.495 "trtype": "TCP", 00:18:47.495 "adrfam": "IPv4", 00:18:47.495 "traddr": "10.0.0.2", 00:18:47.495 "trsvcid": "4420" 00:18:47.495 }, 00:18:47.495 "secure_channel": true 00:18:47.495 } 00:18:47.495 } 00:18:47.495 ] 00:18:47.495 } 00:18:47.495 ] 00:18:47.495 }' 00:18:47.495 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:47.754 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:47.754 "subsystems": [ 00:18:47.754 { 00:18:47.754 "subsystem": "keyring", 00:18:47.754 "config": [ 00:18:47.754 { 00:18:47.754 "method": "keyring_file_add_key", 00:18:47.754 "params": { 00:18:47.754 "name": "key0", 00:18:47.754 "path": "/tmp/tmp.Yw4nntCAEq" 00:18:47.754 } 
00:18:47.754 } 00:18:47.754 ] 00:18:47.754 }, 00:18:47.754 { 00:18:47.754 "subsystem": "iobuf", 00:18:47.754 "config": [ 00:18:47.754 { 00:18:47.754 "method": "iobuf_set_options", 00:18:47.754 "params": { 00:18:47.754 "small_pool_count": 8192, 00:18:47.754 "large_pool_count": 1024, 00:18:47.754 "small_bufsize": 8192, 00:18:47.754 "large_bufsize": 135168, 00:18:47.754 "enable_numa": false 00:18:47.754 } 00:18:47.754 } 00:18:47.754 ] 00:18:47.754 }, 00:18:47.754 { 00:18:47.754 "subsystem": "sock", 00:18:47.754 "config": [ 00:18:47.754 { 00:18:47.754 "method": "sock_set_default_impl", 00:18:47.754 "params": { 00:18:47.754 "impl_name": "posix" 00:18:47.754 } 00:18:47.754 }, 00:18:47.754 { 00:18:47.754 "method": "sock_impl_set_options", 00:18:47.754 "params": { 00:18:47.754 "impl_name": "ssl", 00:18:47.754 "recv_buf_size": 4096, 00:18:47.754 "send_buf_size": 4096, 00:18:47.754 "enable_recv_pipe": true, 00:18:47.754 "enable_quickack": false, 00:18:47.754 "enable_placement_id": 0, 00:18:47.754 "enable_zerocopy_send_server": true, 00:18:47.754 "enable_zerocopy_send_client": false, 00:18:47.754 "zerocopy_threshold": 0, 00:18:47.754 "tls_version": 0, 00:18:47.755 "enable_ktls": false 00:18:47.755 } 00:18:47.755 }, 00:18:47.755 { 00:18:47.755 "method": "sock_impl_set_options", 00:18:47.755 "params": { 00:18:47.755 "impl_name": "posix", 00:18:47.755 "recv_buf_size": 2097152, 00:18:47.755 "send_buf_size": 2097152, 00:18:47.755 "enable_recv_pipe": true, 00:18:47.755 "enable_quickack": false, 00:18:47.755 "enable_placement_id": 0, 00:18:47.755 "enable_zerocopy_send_server": true, 00:18:47.755 "enable_zerocopy_send_client": false, 00:18:47.755 "zerocopy_threshold": 0, 00:18:47.755 "tls_version": 0, 00:18:47.755 "enable_ktls": false 00:18:47.755 } 00:18:47.755 } 00:18:47.755 ] 00:18:47.755 }, 00:18:47.755 { 00:18:47.755 "subsystem": "vmd", 00:18:47.755 "config": [] 00:18:47.755 }, 00:18:47.755 { 00:18:47.755 "subsystem": "accel", 00:18:47.755 "config": [ 00:18:47.755 { 00:18:47.755 
"method": "accel_set_options", 00:18:47.755 "params": { 00:18:47.755 "small_cache_size": 128, 00:18:47.755 "large_cache_size": 16, 00:18:47.755 "task_count": 2048, 00:18:47.755 "sequence_count": 2048, 00:18:47.755 "buf_count": 2048 00:18:47.755 } 00:18:47.755 } 00:18:47.755 ] 00:18:47.755 }, 00:18:47.755 { 00:18:47.755 "subsystem": "bdev", 00:18:47.755 "config": [ 00:18:47.755 { 00:18:47.755 "method": "bdev_set_options", 00:18:47.755 "params": { 00:18:47.755 "bdev_io_pool_size": 65535, 00:18:47.755 "bdev_io_cache_size": 256, 00:18:47.755 "bdev_auto_examine": true, 00:18:47.755 "iobuf_small_cache_size": 128, 00:18:47.755 "iobuf_large_cache_size": 16 00:18:47.755 } 00:18:47.755 }, 00:18:47.755 { 00:18:47.755 "method": "bdev_raid_set_options", 00:18:47.755 "params": { 00:18:47.755 "process_window_size_kb": 1024, 00:18:47.755 "process_max_bandwidth_mb_sec": 0 00:18:47.755 } 00:18:47.755 }, 00:18:47.755 { 00:18:47.755 "method": "bdev_iscsi_set_options", 00:18:47.755 "params": { 00:18:47.755 "timeout_sec": 30 00:18:47.755 } 00:18:47.755 }, 00:18:47.755 { 00:18:47.755 "method": "bdev_nvme_set_options", 00:18:47.755 "params": { 00:18:47.755 "action_on_timeout": "none", 00:18:47.755 "timeout_us": 0, 00:18:47.755 "timeout_admin_us": 0, 00:18:47.755 "keep_alive_timeout_ms": 10000, 00:18:47.755 "arbitration_burst": 0, 00:18:47.755 "low_priority_weight": 0, 00:18:47.755 "medium_priority_weight": 0, 00:18:47.755 "high_priority_weight": 0, 00:18:47.755 "nvme_adminq_poll_period_us": 10000, 00:18:47.755 "nvme_ioq_poll_period_us": 0, 00:18:47.755 "io_queue_requests": 512, 00:18:47.755 "delay_cmd_submit": true, 00:18:47.755 "transport_retry_count": 4, 00:18:47.755 "bdev_retry_count": 3, 00:18:47.755 "transport_ack_timeout": 0, 00:18:47.755 "ctrlr_loss_timeout_sec": 0, 00:18:47.755 "reconnect_delay_sec": 0, 00:18:47.755 "fast_io_fail_timeout_sec": 0, 00:18:47.755 "disable_auto_failback": false, 00:18:47.755 "generate_uuids": false, 00:18:47.755 "transport_tos": 0, 00:18:47.755 
"nvme_error_stat": false, 00:18:47.755 "rdma_srq_size": 0, 00:18:47.755 "io_path_stat": false, 00:18:47.755 "allow_accel_sequence": false, 00:18:47.755 "rdma_max_cq_size": 0, 00:18:47.755 "rdma_cm_event_timeout_ms": 0, 00:18:47.755 "dhchap_digests": [ 00:18:47.755 "sha256", 00:18:47.755 "sha384", 00:18:47.755 "sha512" 00:18:47.755 ], 00:18:47.755 "dhchap_dhgroups": [ 00:18:47.755 "null", 00:18:47.755 "ffdhe2048", 00:18:47.755 "ffdhe3072", 00:18:47.755 "ffdhe4096", 00:18:47.755 "ffdhe6144", 00:18:47.755 "ffdhe8192" 00:18:47.755 ] 00:18:47.755 } 00:18:47.755 }, 00:18:47.755 { 00:18:47.755 "method": "bdev_nvme_attach_controller", 00:18:47.755 "params": { 00:18:47.755 "name": "TLSTEST", 00:18:47.755 "trtype": "TCP", 00:18:47.755 "adrfam": "IPv4", 00:18:47.755 "traddr": "10.0.0.2", 00:18:47.755 "trsvcid": "4420", 00:18:47.755 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.755 "prchk_reftag": false, 00:18:47.755 "prchk_guard": false, 00:18:47.755 "ctrlr_loss_timeout_sec": 0, 00:18:47.755 "reconnect_delay_sec": 0, 00:18:47.755 "fast_io_fail_timeout_sec": 0, 00:18:47.755 "psk": "key0", 00:18:47.755 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.755 "hdgst": false, 00:18:47.755 "ddgst": false, 00:18:47.755 "multipath": "multipath" 00:18:47.755 } 00:18:47.755 }, 00:18:47.755 { 00:18:47.755 "method": "bdev_nvme_set_hotplug", 00:18:47.755 "params": { 00:18:47.755 "period_us": 100000, 00:18:47.755 "enable": false 00:18:47.755 } 00:18:47.755 }, 00:18:47.755 { 00:18:47.755 "method": "bdev_wait_for_examine" 00:18:47.755 } 00:18:47.755 ] 00:18:47.755 }, 00:18:47.755 { 00:18:47.755 "subsystem": "nbd", 00:18:47.755 "config": [] 00:18:47.755 } 00:18:47.755 ] 00:18:47.755 }' 00:18:47.755 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 645946 00:18:47.755 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 645946 ']' 00:18:47.755 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 645946 00:18:47.755 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:47.755 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.755 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 645946 00:18:47.755 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:47.755 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:47.755 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 645946' 00:18:47.755 killing process with pid 645946 00:18:47.755 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 645946 00:18:47.755 Received shutdown signal, test time was about 10.000000 seconds 00:18:47.755 00:18:47.755 Latency(us) 00:18:47.755 [2024-12-10T03:55:38.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.755 [2024-12-10T03:55:38.892Z] =================================================================================================================== 00:18:47.755 [2024-12-10T03:55:38.892Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:47.755 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 645946 00:18:48.015 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 645694 00:18:48.015 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 645694 ']' 00:18:48.015 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 645694 00:18:48.015 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:48.015 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.015 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 645694 00:18:48.015 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:48.015 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:48.015 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 645694' 00:18:48.015 killing process with pid 645694 00:18:48.015 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 645694 00:18:48.015 04:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 645694 00:18:48.015 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:48.015 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:48.015 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.015 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.015 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:48.015 "subsystems": [ 00:18:48.015 { 00:18:48.015 "subsystem": "keyring", 00:18:48.015 "config": [ 00:18:48.015 { 00:18:48.015 "method": "keyring_file_add_key", 00:18:48.015 "params": { 00:18:48.015 "name": "key0", 00:18:48.015 "path": "/tmp/tmp.Yw4nntCAEq" 00:18:48.015 } 00:18:48.015 } 00:18:48.015 ] 00:18:48.015 }, 00:18:48.015 { 00:18:48.015 "subsystem": "iobuf", 00:18:48.015 "config": [ 00:18:48.015 { 00:18:48.015 "method": "iobuf_set_options", 00:18:48.015 "params": { 00:18:48.015 "small_pool_count": 8192, 00:18:48.015 "large_pool_count": 1024, 00:18:48.015 "small_bufsize": 8192, 00:18:48.015 "large_bufsize": 135168, 
00:18:48.015 "enable_numa": false 00:18:48.015 } 00:18:48.015 } 00:18:48.015 ] 00:18:48.015 }, 00:18:48.015 { 00:18:48.015 "subsystem": "sock", 00:18:48.015 "config": [ 00:18:48.015 { 00:18:48.015 "method": "sock_set_default_impl", 00:18:48.015 "params": { 00:18:48.015 "impl_name": "posix" 00:18:48.015 } 00:18:48.015 }, 00:18:48.015 { 00:18:48.015 "method": "sock_impl_set_options", 00:18:48.015 "params": { 00:18:48.015 "impl_name": "ssl", 00:18:48.015 "recv_buf_size": 4096, 00:18:48.015 "send_buf_size": 4096, 00:18:48.015 "enable_recv_pipe": true, 00:18:48.015 "enable_quickack": false, 00:18:48.015 "enable_placement_id": 0, 00:18:48.015 "enable_zerocopy_send_server": true, 00:18:48.015 "enable_zerocopy_send_client": false, 00:18:48.015 "zerocopy_threshold": 0, 00:18:48.015 "tls_version": 0, 00:18:48.015 "enable_ktls": false 00:18:48.015 } 00:18:48.015 }, 00:18:48.015 { 00:18:48.015 "method": "sock_impl_set_options", 00:18:48.015 "params": { 00:18:48.015 "impl_name": "posix", 00:18:48.015 "recv_buf_size": 2097152, 00:18:48.015 "send_buf_size": 2097152, 00:18:48.015 "enable_recv_pipe": true, 00:18:48.015 "enable_quickack": false, 00:18:48.015 "enable_placement_id": 0, 00:18:48.015 "enable_zerocopy_send_server": true, 00:18:48.015 "enable_zerocopy_send_client": false, 00:18:48.015 "zerocopy_threshold": 0, 00:18:48.015 "tls_version": 0, 00:18:48.015 "enable_ktls": false 00:18:48.015 } 00:18:48.015 } 00:18:48.015 ] 00:18:48.015 }, 00:18:48.015 { 00:18:48.015 "subsystem": "vmd", 00:18:48.015 "config": [] 00:18:48.015 }, 00:18:48.015 { 00:18:48.015 "subsystem": "accel", 00:18:48.015 "config": [ 00:18:48.015 { 00:18:48.015 "method": "accel_set_options", 00:18:48.015 "params": { 00:18:48.015 "small_cache_size": 128, 00:18:48.015 "large_cache_size": 16, 00:18:48.015 "task_count": 2048, 00:18:48.015 "sequence_count": 2048, 00:18:48.015 "buf_count": 2048 00:18:48.015 } 00:18:48.015 } 00:18:48.015 ] 00:18:48.015 }, 00:18:48.015 { 00:18:48.015 "subsystem": "bdev", 00:18:48.015 
"config": [ 00:18:48.015 { 00:18:48.015 "method": "bdev_set_options", 00:18:48.015 "params": { 00:18:48.015 "bdev_io_pool_size": 65535, 00:18:48.015 "bdev_io_cache_size": 256, 00:18:48.015 "bdev_auto_examine": true, 00:18:48.015 "iobuf_small_cache_size": 128, 00:18:48.015 "iobuf_large_cache_size": 16 00:18:48.015 } 00:18:48.015 }, 00:18:48.015 { 00:18:48.015 "method": "bdev_raid_set_options", 00:18:48.015 "params": { 00:18:48.015 "process_window_size_kb": 1024, 00:18:48.015 "process_max_bandwidth_mb_sec": 0 00:18:48.015 } 00:18:48.015 }, 00:18:48.015 { 00:18:48.015 "method": "bdev_iscsi_set_options", 00:18:48.015 "params": { 00:18:48.015 "timeout_sec": 30 00:18:48.015 } 00:18:48.015 }, 00:18:48.015 { 00:18:48.015 "method": "bdev_nvme_set_options", 00:18:48.015 "params": { 00:18:48.015 "action_on_timeout": "none", 00:18:48.015 "timeout_us": 0, 00:18:48.015 "timeout_admin_us": 0, 00:18:48.015 "keep_alive_timeout_ms": 10000, 00:18:48.015 "arbitration_burst": 0, 00:18:48.015 "low_priority_weight": 0, 00:18:48.015 "medium_priority_weight": 0, 00:18:48.015 "high_priority_weight": 0, 00:18:48.015 "nvme_adminq_poll_period_us": 10000, 00:18:48.016 "nvme_ioq_poll_period_us": 0, 00:18:48.016 "io_queue_requests": 0, 00:18:48.016 "delay_cmd_submit": true, 00:18:48.016 "transport_retry_count": 4, 00:18:48.016 "bdev_retry_count": 3, 00:18:48.016 "transport_ack_timeout": 0, 00:18:48.016 "ctrlr_loss_timeout_sec": 0, 00:18:48.016 "reconnect_delay_sec": 0, 00:18:48.016 "fast_io_fail_timeout_sec": 0, 00:18:48.016 "disable_auto_failback": false, 00:18:48.016 "generate_uuids": false, 00:18:48.016 "transport_tos": 0, 00:18:48.016 "nvme_error_stat": false, 00:18:48.016 "rdma_srq_size": 0, 00:18:48.016 "io_path_stat": false, 00:18:48.016 "allow_accel_sequence": false, 00:18:48.016 "rdma_max_cq_size": 0, 00:18:48.016 "rdma_cm_event_timeout_ms": 0, 00:18:48.016 "dhchap_digests": [ 00:18:48.016 "sha256", 00:18:48.016 "sha384", 00:18:48.016 "sha512" 00:18:48.016 ], 00:18:48.016 
"dhchap_dhgroups": [ 00:18:48.016 "null", 00:18:48.016 "ffdhe2048", 00:18:48.016 "ffdhe3072", 00:18:48.016 "ffdhe4096", 00:18:48.016 "ffdhe6144", 00:18:48.016 "ffdhe8192" 00:18:48.016 ] 00:18:48.016 } 00:18:48.016 }, 00:18:48.016 { 00:18:48.016 "method": "bdev_nvme_set_hotplug", 00:18:48.016 "params": { 00:18:48.016 "period_us": 100000, 00:18:48.016 "enable": false 00:18:48.016 } 00:18:48.016 }, 00:18:48.016 { 00:18:48.016 "method": "bdev_malloc_create", 00:18:48.016 "params": { 00:18:48.016 "name": "malloc0", 00:18:48.016 "num_blocks": 8192, 00:18:48.016 "block_size": 4096, 00:18:48.016 "physical_block_size": 4096, 00:18:48.016 "uuid": "1f529d02-5ab7-4bba-9865-9945f11c954f", 00:18:48.016 "optimal_io_boundary": 0, 00:18:48.016 "md_size": 0, 00:18:48.016 "dif_type": 0, 00:18:48.016 "dif_is_head_of_md": false, 00:18:48.016 "dif_pi_format": 0 00:18:48.016 } 00:18:48.016 }, 00:18:48.016 { 00:18:48.016 "method": "bdev_wait_for_examine" 00:18:48.016 } 00:18:48.016 ] 00:18:48.016 }, 00:18:48.016 { 00:18:48.016 "subsystem": "nbd", 00:18:48.016 "config": [] 00:18:48.016 }, 00:18:48.016 { 00:18:48.016 "subsystem": "scheduler", 00:18:48.016 "config": [ 00:18:48.016 { 00:18:48.016 "method": "framework_set_scheduler", 00:18:48.016 "params": { 00:18:48.016 "name": "static" 00:18:48.016 } 00:18:48.016 } 00:18:48.016 ] 00:18:48.016 }, 00:18:48.016 { 00:18:48.016 "subsystem": "nvmf", 00:18:48.016 "config": [ 00:18:48.016 { 00:18:48.016 "method": "nvmf_set_config", 00:18:48.016 "params": { 00:18:48.016 "discovery_filter": "match_any", 00:18:48.016 "admin_cmd_passthru": { 00:18:48.016 "identify_ctrlr": false 00:18:48.016 }, 00:18:48.016 "dhchap_digests": [ 00:18:48.016 "sha256", 00:18:48.016 "sha384", 00:18:48.016 "sha512" 00:18:48.016 ], 00:18:48.016 "dhchap_dhgroups": [ 00:18:48.016 "null", 00:18:48.016 "ffdhe2048", 00:18:48.016 "ffdhe3072", 00:18:48.016 "ffdhe4096", 00:18:48.016 "ffdhe6144", 00:18:48.016 "ffdhe8192" 00:18:48.016 ] 00:18:48.016 } 00:18:48.016 }, 00:18:48.016 { 
00:18:48.016 "method": "nvmf_set_max_subsystems", 00:18:48.016 "params": { 00:18:48.016 "max_subsystems": 1024 00:18:48.016 } 00:18:48.016 }, 00:18:48.016 { 00:18:48.016 "method": "nvmf_set_crdt", 00:18:48.016 "params": { 00:18:48.016 "crdt1": 0, 00:18:48.016 "crdt2": 0, 00:18:48.016 "crdt3": 0 00:18:48.016 } 00:18:48.016 }, 00:18:48.016 { 00:18:48.016 "method": "nvmf_create_transport", 00:18:48.016 "params": { 00:18:48.016 "trtype": "TCP", 00:18:48.016 "max_queue_depth": 128, 00:18:48.016 "max_io_qpairs_per_ctrlr": 127, 00:18:48.016 "in_capsule_data_size": 4096, 00:18:48.016 "max_io_size": 131072, 00:18:48.016 "io_unit_size": 131072, 00:18:48.016 "max_aq_depth": 128, 00:18:48.016 "num_shared_buffers": 511, 00:18:48.016 "buf_cache_size": 4294967295, 00:18:48.016 "dif_insert_or_strip": false, 00:18:48.016 "zcopy": false, 00:18:48.016 "c2h_success": false, 00:18:48.016 "sock_priority": 0, 00:18:48.016 "abort_timeout_sec": 1, 00:18:48.016 "ack_timeout": 0, 00:18:48.016 "data_wr_pool_size": 0 00:18:48.016 } 00:18:48.016 }, 00:18:48.016 { 00:18:48.016 "method": "nvmf_create_subsystem", 00:18:48.016 "params": { 00:18:48.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.016 "allow_any_host": false, 00:18:48.016 "serial_number": "SPDK00000000000001", 00:18:48.016 "model_number": "SPDK bdev Controller", 00:18:48.016 "max_namespaces": 10, 00:18:48.016 "min_cntlid": 1, 00:18:48.016 "max_cntlid": 65519, 00:18:48.016 "ana_reporting": false 00:18:48.016 } 00:18:48.016 }, 00:18:48.016 { 00:18:48.016 "method": "nvmf_subsystem_add_host", 00:18:48.016 "params": { 00:18:48.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.016 "host": "nqn.2016-06.io.spdk:host1", 00:18:48.016 "psk": "key0" 00:18:48.016 } 00:18:48.016 }, 00:18:48.016 { 00:18:48.016 "method": "nvmf_subsystem_add_ns", 00:18:48.016 "params": { 00:18:48.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.016 "namespace": { 00:18:48.016 "nsid": 1, 00:18:48.016 "bdev_name": "malloc0", 00:18:48.016 "nguid": 
"1F529D025AB74BBA98659945F11C954F", 00:18:48.016 "uuid": "1f529d02-5ab7-4bba-9865-9945f11c954f", 00:18:48.016 "no_auto_visible": false 00:18:48.016 } 00:18:48.016 } 00:18:48.016 }, 00:18:48.016 { 00:18:48.016 "method": "nvmf_subsystem_add_listener", 00:18:48.016 "params": { 00:18:48.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.016 "listen_address": { 00:18:48.016 "trtype": "TCP", 00:18:48.016 "adrfam": "IPv4", 00:18:48.016 "traddr": "10.0.0.2", 00:18:48.016 "trsvcid": "4420" 00:18:48.016 }, 00:18:48.016 "secure_channel": true 00:18:48.016 } 00:18:48.016 } 00:18:48.016 ] 00:18:48.016 } 00:18:48.016 ] 00:18:48.016 }' 00:18:48.016 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=646235 00:18:48.016 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 646235 00:18:48.016 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:48.016 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 646235 ']' 00:18:48.016 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.016 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.016 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:48.016 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.016 04:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.275 [2024-12-10 04:55:39.185619] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:48.275 [2024-12-10 04:55:39.185671] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.275 [2024-12-10 04:55:39.267356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.275 [2024-12-10 04:55:39.308052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.275 [2024-12-10 04:55:39.308086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.275 [2024-12-10 04:55:39.308094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.276 [2024-12-10 04:55:39.308100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.276 [2024-12-10 04:55:39.308105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:48.276 [2024-12-10 04:55:39.308665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.534 [2024-12-10 04:55:39.520769] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.534 [2024-12-10 04:55:39.552788] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:48.534 [2024-12-10 04:55:39.552991] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.103 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.103 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:49.103 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:49.103 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.103 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.103 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.103 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=646427 00:18:49.103 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 646427 /var/tmp/bdevperf.sock 00:18:49.103 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 646427 ']' 00:18:49.103 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.103 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:49.103 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:49.103 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.103 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:49.103 "subsystems": [ 00:18:49.103 { 00:18:49.103 "subsystem": "keyring", 00:18:49.103 "config": [ 00:18:49.103 { 00:18:49.103 "method": "keyring_file_add_key", 00:18:49.103 "params": { 00:18:49.103 "name": "key0", 00:18:49.103 "path": "/tmp/tmp.Yw4nntCAEq" 00:18:49.103 } 00:18:49.103 } 00:18:49.103 ] 00:18:49.103 }, 00:18:49.103 { 00:18:49.103 "subsystem": "iobuf", 00:18:49.103 "config": [ 00:18:49.103 { 00:18:49.103 "method": "iobuf_set_options", 00:18:49.103 "params": { 00:18:49.103 "small_pool_count": 8192, 00:18:49.103 "large_pool_count": 1024, 00:18:49.103 "small_bufsize": 8192, 00:18:49.103 "large_bufsize": 135168, 00:18:49.103 "enable_numa": false 00:18:49.103 } 00:18:49.103 } 00:18:49.103 ] 00:18:49.103 }, 00:18:49.103 { 00:18:49.103 "subsystem": "sock", 00:18:49.103 "config": [ 00:18:49.103 { 00:18:49.103 "method": "sock_set_default_impl", 00:18:49.103 "params": { 00:18:49.103 "impl_name": "posix" 00:18:49.103 } 00:18:49.103 }, 00:18:49.103 { 00:18:49.103 "method": "sock_impl_set_options", 00:18:49.103 "params": { 00:18:49.103 "impl_name": "ssl", 00:18:49.103 "recv_buf_size": 4096, 00:18:49.103 "send_buf_size": 4096, 00:18:49.103 "enable_recv_pipe": true, 00:18:49.103 "enable_quickack": false, 00:18:49.103 "enable_placement_id": 0, 00:18:49.103 "enable_zerocopy_send_server": true, 00:18:49.103 "enable_zerocopy_send_client": false, 00:18:49.103 "zerocopy_threshold": 0, 00:18:49.103 "tls_version": 0, 00:18:49.103 "enable_ktls": false 00:18:49.103 } 00:18:49.103 }, 00:18:49.103 { 00:18:49.103 "method": "sock_impl_set_options", 00:18:49.103 "params": { 
00:18:49.103 "impl_name": "posix", 00:18:49.103 "recv_buf_size": 2097152, 00:18:49.103 "send_buf_size": 2097152, 00:18:49.103 "enable_recv_pipe": true, 00:18:49.103 "enable_quickack": false, 00:18:49.103 "enable_placement_id": 0, 00:18:49.103 "enable_zerocopy_send_server": true, 00:18:49.103 "enable_zerocopy_send_client": false, 00:18:49.103 "zerocopy_threshold": 0, 00:18:49.103 "tls_version": 0, 00:18:49.103 "enable_ktls": false 00:18:49.103 } 00:18:49.103 } 00:18:49.103 ] 00:18:49.103 }, 00:18:49.103 { 00:18:49.103 "subsystem": "vmd", 00:18:49.103 "config": [] 00:18:49.103 }, 00:18:49.103 { 00:18:49.103 "subsystem": "accel", 00:18:49.103 "config": [ 00:18:49.103 { 00:18:49.103 "method": "accel_set_options", 00:18:49.103 "params": { 00:18:49.103 "small_cache_size": 128, 00:18:49.103 "large_cache_size": 16, 00:18:49.103 "task_count": 2048, 00:18:49.103 "sequence_count": 2048, 00:18:49.103 "buf_count": 2048 00:18:49.103 } 00:18:49.103 } 00:18:49.103 ] 00:18:49.103 }, 00:18:49.103 { 00:18:49.103 "subsystem": "bdev", 00:18:49.103 "config": [ 00:18:49.103 { 00:18:49.103 "method": "bdev_set_options", 00:18:49.103 "params": { 00:18:49.103 "bdev_io_pool_size": 65535, 00:18:49.103 "bdev_io_cache_size": 256, 00:18:49.103 "bdev_auto_examine": true, 00:18:49.103 "iobuf_small_cache_size": 128, 00:18:49.103 "iobuf_large_cache_size": 16 00:18:49.103 } 00:18:49.103 }, 00:18:49.103 { 00:18:49.103 "method": "bdev_raid_set_options", 00:18:49.103 "params": { 00:18:49.103 "process_window_size_kb": 1024, 00:18:49.103 "process_max_bandwidth_mb_sec": 0 00:18:49.103 } 00:18:49.103 }, 00:18:49.103 { 00:18:49.103 "method": "bdev_iscsi_set_options", 00:18:49.103 "params": { 00:18:49.103 "timeout_sec": 30 00:18:49.103 } 00:18:49.103 }, 00:18:49.103 { 00:18:49.103 "method": "bdev_nvme_set_options", 00:18:49.103 "params": { 00:18:49.103 "action_on_timeout": "none", 00:18:49.103 "timeout_us": 0, 00:18:49.103 "timeout_admin_us": 0, 00:18:49.103 "keep_alive_timeout_ms": 10000, 00:18:49.103 
"arbitration_burst": 0, 00:18:49.103 "low_priority_weight": 0, 00:18:49.103 "medium_priority_weight": 0, 00:18:49.103 "high_priority_weight": 0, 00:18:49.103 "nvme_adminq_poll_period_us": 10000, 00:18:49.103 "nvme_ioq_poll_period_us": 0, 00:18:49.103 "io_queue_requests": 512, 00:18:49.103 "delay_cmd_submit": true, 00:18:49.103 "transport_retry_count": 4, 00:18:49.103 "bdev_retry_count": 3, 00:18:49.103 "transport_ack_timeout": 0, 00:18:49.103 "ctrlr_loss_timeout_sec": 0, 00:18:49.103 "reconnect_delay_sec": 0, 00:18:49.103 "fast_io_fail_timeout_sec": 0, 00:18:49.103 "disable_auto_failback": false, 00:18:49.103 "generate_uuids": false, 00:18:49.103 "transport_tos": 0, 00:18:49.103 "nvme_error_stat": false, 00:18:49.103 "rdma_srq_size": 0, 00:18:49.103 "io_path_stat": false, 00:18:49.103 "allow_accel_sequence": false, 00:18:49.103 "rdma_max_cq_size": 0, 00:18:49.103 "rdma_cm_event_timeout_ms": 0, 00:18:49.103 "dhchap_digests": [ 00:18:49.103 "sha256", 00:18:49.103 "sha384", 00:18:49.103 "sha512" 00:18:49.103 ], 00:18:49.103 "dhchap_dhgroups": [ 00:18:49.103 "null", 00:18:49.103 "ffdhe2048", 00:18:49.103 "ffdhe3072", 00:18:49.103 "ffdhe4096", 00:18:49.103 "ffdhe6144", 00:18:49.103 "ffdhe8192" 00:18:49.103 ] 00:18:49.103 } 00:18:49.103 }, 00:18:49.103 { 00:18:49.103 "method": "bdev_nvme_attach_controller", 00:18:49.103 "params": { 00:18:49.103 "name": "TLSTEST", 00:18:49.103 "trtype": "TCP", 00:18:49.103 "adrfam": "IPv4", 00:18:49.103 "traddr": "10.0.0.2", 00:18:49.103 "trsvcid": "4420", 00:18:49.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.103 "prchk_reftag": false, 00:18:49.103 "prchk_guard": false, 00:18:49.103 "ctrlr_loss_timeout_sec": 0, 00:18:49.103 "reconnect_delay_sec": 0, 00:18:49.103 "fast_io_fail_timeout_sec": 0, 00:18:49.103 "psk": "key0", 00:18:49.103 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.104 "hdgst": false, 00:18:49.104 "ddgst": false, 00:18:49.104 "multipath": "multipath" 00:18:49.104 } 00:18:49.104 }, 00:18:49.104 { 00:18:49.104 
"method": "bdev_nvme_set_hotplug", 00:18:49.104 "params": { 00:18:49.104 "period_us": 100000, 00:18:49.104 "enable": false 00:18:49.104 } 00:18:49.104 }, 00:18:49.104 { 00:18:49.104 "method": "bdev_wait_for_examine" 00:18:49.104 } 00:18:49.104 ] 00:18:49.104 }, 00:18:49.104 { 00:18:49.104 "subsystem": "nbd", 00:18:49.104 "config": [] 00:18:49.104 } 00:18:49.104 ] 00:18:49.104 }' 00:18:49.104 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.104 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.104 [2024-12-10 04:55:40.101787] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:49.104 [2024-12-10 04:55:40.101837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid646427 ] 00:18:49.104 [2024-12-10 04:55:40.176608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.104 [2024-12-10 04:55:40.217282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.362 [2024-12-10 04:55:40.371111] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:49.930 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.930 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:49.930 04:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:49.930 Running I/O for 10 seconds... 
00:18:52.244 5643.00 IOPS, 22.04 MiB/s [2024-12-10T03:55:44.317Z] 5661.50 IOPS, 22.12 MiB/s [2024-12-10T03:55:45.252Z] 5522.33 IOPS, 21.57 MiB/s [2024-12-10T03:55:46.187Z] 5533.00 IOPS, 21.61 MiB/s [2024-12-10T03:55:47.124Z] 5522.20 IOPS, 21.57 MiB/s [2024-12-10T03:55:48.060Z] 5556.33 IOPS, 21.70 MiB/s [2024-12-10T03:55:49.436Z] 5568.14 IOPS, 21.75 MiB/s [2024-12-10T03:55:50.372Z] 5551.38 IOPS, 21.69 MiB/s [2024-12-10T03:55:51.308Z] 5559.33 IOPS, 21.72 MiB/s [2024-12-10T03:55:51.308Z] 5554.20 IOPS, 21.70 MiB/s 00:19:00.171 Latency(us) 00:19:00.171 [2024-12-10T03:55:51.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.172 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:00.172 Verification LBA range: start 0x0 length 0x2000 00:19:00.172 TLSTESTn1 : 10.01 5558.58 21.71 0.00 0.00 22993.13 4899.60 23218.47 00:19:00.172 [2024-12-10T03:55:51.309Z] =================================================================================================================== 00:19:00.172 [2024-12-10T03:55:51.309Z] Total : 5558.58 21.71 0.00 0.00 22993.13 4899.60 23218.47 00:19:00.172 { 00:19:00.172 "results": [ 00:19:00.172 { 00:19:00.172 "job": "TLSTESTn1", 00:19:00.172 "core_mask": "0x4", 00:19:00.172 "workload": "verify", 00:19:00.172 "status": "finished", 00:19:00.172 "verify_range": { 00:19:00.172 "start": 0, 00:19:00.172 "length": 8192 00:19:00.172 }, 00:19:00.172 "queue_depth": 128, 00:19:00.172 "io_size": 4096, 00:19:00.172 "runtime": 10.014968, 00:19:00.172 "iops": 5558.579917579367, 00:19:00.172 "mibps": 21.713202803044403, 00:19:00.172 "io_failed": 0, 00:19:00.172 "io_timeout": 0, 00:19:00.172 "avg_latency_us": 22993.1286405959, 00:19:00.172 "min_latency_us": 4899.596190476191, 00:19:00.172 "max_latency_us": 23218.46857142857 00:19:00.172 } 00:19:00.172 ], 00:19:00.172 "core_count": 1 00:19:00.172 } 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 646427 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 646427 ']' 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 646427 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 646427 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 646427' 00:19:00.172 killing process with pid 646427 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 646427 00:19:00.172 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.172 00:19:00.172 Latency(us) 00:19:00.172 [2024-12-10T03:55:51.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.172 [2024-12-10T03:55:51.309Z] =================================================================================================================== 00:19:00.172 [2024-12-10T03:55:51.309Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 646427 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 646235 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # 
'[' -z 646235 ']' 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 646235 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.172 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 646235 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 646235' 00:19:00.431 killing process with pid 646235 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 646235 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 646235 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=648223 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 648223 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@835 -- # '[' -z 648223 ']' 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.431 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.690 [2024-12-10 04:55:51.566225] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:19:00.690 [2024-12-10 04:55:51.566273] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.690 [2024-12-10 04:55:51.644063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.690 [2024-12-10 04:55:51.680726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.690 [2024-12-10 04:55:51.680761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.691 [2024-12-10 04:55:51.680768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.691 [2024-12-10 04:55:51.680773] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.691 [2024-12-10 04:55:51.680778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:00.691 [2024-12-10 04:55:51.681294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.691 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.691 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:00.691 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:00.691 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:00.691 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.691 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.691 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Yw4nntCAEq 00:19:00.691 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Yw4nntCAEq 00:19:00.691 04:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:00.949 [2024-12-10 04:55:51.991000] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.949 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:01.208 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:01.467 [2024-12-10 04:55:52.367940] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:01.467 [2024-12-10 04:55:52.368158] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:01.467 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:01.467 malloc0 00:19:01.467 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:01.726 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Yw4nntCAEq 00:19:01.984 04:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:02.244 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=648559 00:19:02.244 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:02.244 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:02.244 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 648559 /var/tmp/bdevperf.sock 00:19:02.244 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 648559 ']' 00:19:02.244 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.244 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.244 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:19:02.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:02.244 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.244 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.244 [2024-12-10 04:55:53.229734] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:19:02.244 [2024-12-10 04:55:53.229786] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid648559 ] 00:19:02.244 [2024-12-10 04:55:53.305649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.244 [2024-12-10 04:55:53.344972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.503 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.503 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:02.503 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Yw4nntCAEq 00:19:02.762 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:02.762 [2024-12-10 04:55:53.813401] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.762 nvme0n1 00:19:03.020 04:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:03.020 Running I/O for 1 seconds... 00:19:03.956 5278.00 IOPS, 20.62 MiB/s 00:19:03.956 Latency(us) 00:19:03.957 [2024-12-10T03:55:55.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.957 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:03.957 Verification LBA range: start 0x0 length 0x2000 00:19:03.957 nvme0n1 : 1.01 5340.18 20.86 0.00 0.00 23814.24 5211.67 38198.13 00:19:03.957 [2024-12-10T03:55:55.094Z] =================================================================================================================== 00:19:03.957 [2024-12-10T03:55:55.094Z] Total : 5340.18 20.86 0.00 0.00 23814.24 5211.67 38198.13 00:19:03.957 { 00:19:03.957 "results": [ 00:19:03.957 { 00:19:03.957 "job": "nvme0n1", 00:19:03.957 "core_mask": "0x2", 00:19:03.957 "workload": "verify", 00:19:03.957 "status": "finished", 00:19:03.957 "verify_range": { 00:19:03.957 "start": 0, 00:19:03.957 "length": 8192 00:19:03.957 }, 00:19:03.957 "queue_depth": 128, 00:19:03.957 "io_size": 4096, 00:19:03.957 "runtime": 1.012326, 00:19:03.957 "iops": 5340.176978562242, 00:19:03.957 "mibps": 20.860066322508757, 00:19:03.957 "io_failed": 0, 00:19:03.957 "io_timeout": 0, 00:19:03.957 "avg_latency_us": 23814.236220072937, 00:19:03.957 "min_latency_us": 5211.672380952381, 00:19:03.957 "max_latency_us": 38198.125714285714 00:19:03.957 } 00:19:03.957 ], 00:19:03.957 "core_count": 1 00:19:03.957 } 00:19:03.957 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 648559 00:19:03.957 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 648559 ']' 00:19:03.957 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 648559 00:19:03.957 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # 
uname 00:19:03.957 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.957 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 648559 00:19:03.957 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:03.957 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:03.957 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 648559' 00:19:03.957 killing process with pid 648559 00:19:03.957 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 648559 00:19:03.957 Received shutdown signal, test time was about 1.000000 seconds 00:19:03.957 00:19:03.957 Latency(us) 00:19:03.957 [2024-12-10T03:55:55.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.957 [2024-12-10T03:55:55.094Z] =================================================================================================================== 00:19:03.957 [2024-12-10T03:55:55.094Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:03.957 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 648559 00:19:04.215 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 648223 00:19:04.215 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 648223 ']' 00:19:04.215 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 648223 00:19:04.215 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:04.215 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.215 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 648223 00:19:04.215 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.215 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.215 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 648223' 00:19:04.215 killing process with pid 648223 00:19:04.215 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 648223 00:19:04.215 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 648223 00:19:04.473 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:04.473 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:04.473 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:04.473 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.474 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=648931 00:19:04.474 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 648931 00:19:04.474 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:04.474 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 648931 ']' 00:19:04.474 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.474 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.474 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.474 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.474 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.474 [2024-12-10 04:55:55.518398] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:19:04.474 [2024-12-10 04:55:55.518446] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.474 [2024-12-10 04:55:55.576906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.732 [2024-12-10 04:55:55.617152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.732 [2024-12-10 04:55:55.617191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.732 [2024-12-10 04:55:55.617199] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.732 [2024-12-10 04:55:55.617205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.732 [2024-12-10 04:55:55.617210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:04.732 [2024-12-10 04:55:55.617699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.732 [2024-12-10 04:55:55.760898] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.732 malloc0 00:19:04.732 [2024-12-10 04:55:55.788809] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:04.732 [2024-12-10 04:55:55.789033] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=648963 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 648963 /var/tmp/bdevperf.sock 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 648963 ']' 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.732 04:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.732 [2024-12-10 04:55:55.862196] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:19:04.733 [2024-12-10 04:55:55.862241] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid648963 ] 00:19:04.991 [2024-12-10 04:55:55.935893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.991 [2024-12-10 04:55:55.975705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.991 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.991 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:04.991 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Yw4nntCAEq 00:19:05.250 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:05.509 [2024-12-10 04:55:56.431306] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.509 nvme0n1 00:19:05.509 04:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:05.509 Running I/O for 1 seconds... 
00:19:06.887 5308.00 IOPS, 20.73 MiB/s 00:19:06.887 Latency(us) 00:19:06.887 [2024-12-10T03:55:58.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.887 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:06.887 Verification LBA range: start 0x0 length 0x2000 00:19:06.887 nvme0n1 : 1.01 5372.13 20.98 0.00 0.00 23673.17 4805.97 52179.14 00:19:06.887 [2024-12-10T03:55:58.024Z] =================================================================================================================== 00:19:06.887 [2024-12-10T03:55:58.024Z] Total : 5372.13 20.98 0.00 0.00 23673.17 4805.97 52179.14 00:19:06.887 { 00:19:06.887 "results": [ 00:19:06.887 { 00:19:06.887 "job": "nvme0n1", 00:19:06.887 "core_mask": "0x2", 00:19:06.887 "workload": "verify", 00:19:06.887 "status": "finished", 00:19:06.887 "verify_range": { 00:19:06.887 "start": 0, 00:19:06.887 "length": 8192 00:19:06.887 }, 00:19:06.887 "queue_depth": 128, 00:19:06.887 "io_size": 4096, 00:19:06.887 "runtime": 1.01189, 00:19:06.887 "iops": 5372.125428653312, 00:19:06.887 "mibps": 20.984864955677, 00:19:06.887 "io_failed": 0, 00:19:06.887 "io_timeout": 0, 00:19:06.887 "avg_latency_us": 23673.166336592032, 00:19:06.887 "min_latency_us": 4805.973333333333, 00:19:06.887 "max_latency_us": 52179.13904761905 00:19:06.887 } 00:19:06.887 ], 00:19:06.887 "core_count": 1 00:19:06.887 } 00:19:06.887 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:06.887 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.887 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.887 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.887 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:06.887 "subsystems": [ 00:19:06.887 { 00:19:06.887 "subsystem": 
"keyring", 00:19:06.887 "config": [ 00:19:06.887 { 00:19:06.887 "method": "keyring_file_add_key", 00:19:06.887 "params": { 00:19:06.887 "name": "key0", 00:19:06.887 "path": "/tmp/tmp.Yw4nntCAEq" 00:19:06.887 } 00:19:06.887 } 00:19:06.887 ] 00:19:06.887 }, 00:19:06.887 { 00:19:06.887 "subsystem": "iobuf", 00:19:06.887 "config": [ 00:19:06.887 { 00:19:06.887 "method": "iobuf_set_options", 00:19:06.887 "params": { 00:19:06.887 "small_pool_count": 8192, 00:19:06.887 "large_pool_count": 1024, 00:19:06.887 "small_bufsize": 8192, 00:19:06.887 "large_bufsize": 135168, 00:19:06.887 "enable_numa": false 00:19:06.887 } 00:19:06.887 } 00:19:06.887 ] 00:19:06.887 }, 00:19:06.887 { 00:19:06.887 "subsystem": "sock", 00:19:06.887 "config": [ 00:19:06.887 { 00:19:06.887 "method": "sock_set_default_impl", 00:19:06.887 "params": { 00:19:06.887 "impl_name": "posix" 00:19:06.887 } 00:19:06.887 }, 00:19:06.887 { 00:19:06.887 "method": "sock_impl_set_options", 00:19:06.887 "params": { 00:19:06.887 "impl_name": "ssl", 00:19:06.887 "recv_buf_size": 4096, 00:19:06.887 "send_buf_size": 4096, 00:19:06.887 "enable_recv_pipe": true, 00:19:06.887 "enable_quickack": false, 00:19:06.887 "enable_placement_id": 0, 00:19:06.887 "enable_zerocopy_send_server": true, 00:19:06.887 "enable_zerocopy_send_client": false, 00:19:06.887 "zerocopy_threshold": 0, 00:19:06.887 "tls_version": 0, 00:19:06.887 "enable_ktls": false 00:19:06.887 } 00:19:06.887 }, 00:19:06.887 { 00:19:06.887 "method": "sock_impl_set_options", 00:19:06.887 "params": { 00:19:06.887 "impl_name": "posix", 00:19:06.887 "recv_buf_size": 2097152, 00:19:06.887 "send_buf_size": 2097152, 00:19:06.887 "enable_recv_pipe": true, 00:19:06.887 "enable_quickack": false, 00:19:06.887 "enable_placement_id": 0, 00:19:06.887 "enable_zerocopy_send_server": true, 00:19:06.887 "enable_zerocopy_send_client": false, 00:19:06.887 "zerocopy_threshold": 0, 00:19:06.887 "tls_version": 0, 00:19:06.887 "enable_ktls": false 00:19:06.887 } 00:19:06.887 } 00:19:06.888 
] 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "subsystem": "vmd", 00:19:06.888 "config": [] 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "subsystem": "accel", 00:19:06.888 "config": [ 00:19:06.888 { 00:19:06.888 "method": "accel_set_options", 00:19:06.888 "params": { 00:19:06.888 "small_cache_size": 128, 00:19:06.888 "large_cache_size": 16, 00:19:06.888 "task_count": 2048, 00:19:06.888 "sequence_count": 2048, 00:19:06.888 "buf_count": 2048 00:19:06.888 } 00:19:06.888 } 00:19:06.888 ] 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "subsystem": "bdev", 00:19:06.888 "config": [ 00:19:06.888 { 00:19:06.888 "method": "bdev_set_options", 00:19:06.888 "params": { 00:19:06.888 "bdev_io_pool_size": 65535, 00:19:06.888 "bdev_io_cache_size": 256, 00:19:06.888 "bdev_auto_examine": true, 00:19:06.888 "iobuf_small_cache_size": 128, 00:19:06.888 "iobuf_large_cache_size": 16 00:19:06.888 } 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "method": "bdev_raid_set_options", 00:19:06.888 "params": { 00:19:06.888 "process_window_size_kb": 1024, 00:19:06.888 "process_max_bandwidth_mb_sec": 0 00:19:06.888 } 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "method": "bdev_iscsi_set_options", 00:19:06.888 "params": { 00:19:06.888 "timeout_sec": 30 00:19:06.888 } 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "method": "bdev_nvme_set_options", 00:19:06.888 "params": { 00:19:06.888 "action_on_timeout": "none", 00:19:06.888 "timeout_us": 0, 00:19:06.888 "timeout_admin_us": 0, 00:19:06.888 "keep_alive_timeout_ms": 10000, 00:19:06.888 "arbitration_burst": 0, 00:19:06.888 "low_priority_weight": 0, 00:19:06.888 "medium_priority_weight": 0, 00:19:06.888 "high_priority_weight": 0, 00:19:06.888 "nvme_adminq_poll_period_us": 10000, 00:19:06.888 "nvme_ioq_poll_period_us": 0, 00:19:06.888 "io_queue_requests": 0, 00:19:06.888 "delay_cmd_submit": true, 00:19:06.888 "transport_retry_count": 4, 00:19:06.888 "bdev_retry_count": 3, 00:19:06.888 "transport_ack_timeout": 0, 00:19:06.888 "ctrlr_loss_timeout_sec": 0, 
00:19:06.888 "reconnect_delay_sec": 0, 00:19:06.888 "fast_io_fail_timeout_sec": 0, 00:19:06.888 "disable_auto_failback": false, 00:19:06.888 "generate_uuids": false, 00:19:06.888 "transport_tos": 0, 00:19:06.888 "nvme_error_stat": false, 00:19:06.888 "rdma_srq_size": 0, 00:19:06.888 "io_path_stat": false, 00:19:06.888 "allow_accel_sequence": false, 00:19:06.888 "rdma_max_cq_size": 0, 00:19:06.888 "rdma_cm_event_timeout_ms": 0, 00:19:06.888 "dhchap_digests": [ 00:19:06.888 "sha256", 00:19:06.888 "sha384", 00:19:06.888 "sha512" 00:19:06.888 ], 00:19:06.888 "dhchap_dhgroups": [ 00:19:06.888 "null", 00:19:06.888 "ffdhe2048", 00:19:06.888 "ffdhe3072", 00:19:06.888 "ffdhe4096", 00:19:06.888 "ffdhe6144", 00:19:06.888 "ffdhe8192" 00:19:06.888 ] 00:19:06.888 } 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "method": "bdev_nvme_set_hotplug", 00:19:06.888 "params": { 00:19:06.888 "period_us": 100000, 00:19:06.888 "enable": false 00:19:06.888 } 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "method": "bdev_malloc_create", 00:19:06.888 "params": { 00:19:06.888 "name": "malloc0", 00:19:06.888 "num_blocks": 8192, 00:19:06.888 "block_size": 4096, 00:19:06.888 "physical_block_size": 4096, 00:19:06.888 "uuid": "00e29cd1-e721-417b-8d22-312b7c2aff2a", 00:19:06.888 "optimal_io_boundary": 0, 00:19:06.888 "md_size": 0, 00:19:06.888 "dif_type": 0, 00:19:06.888 "dif_is_head_of_md": false, 00:19:06.888 "dif_pi_format": 0 00:19:06.888 } 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "method": "bdev_wait_for_examine" 00:19:06.888 } 00:19:06.888 ] 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "subsystem": "nbd", 00:19:06.888 "config": [] 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "subsystem": "scheduler", 00:19:06.888 "config": [ 00:19:06.888 { 00:19:06.888 "method": "framework_set_scheduler", 00:19:06.888 "params": { 00:19:06.888 "name": "static" 00:19:06.888 } 00:19:06.888 } 00:19:06.888 ] 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "subsystem": "nvmf", 00:19:06.888 "config": [ 00:19:06.888 { 
00:19:06.888 "method": "nvmf_set_config", 00:19:06.888 "params": { 00:19:06.888 "discovery_filter": "match_any", 00:19:06.888 "admin_cmd_passthru": { 00:19:06.888 "identify_ctrlr": false 00:19:06.888 }, 00:19:06.888 "dhchap_digests": [ 00:19:06.888 "sha256", 00:19:06.888 "sha384", 00:19:06.888 "sha512" 00:19:06.888 ], 00:19:06.888 "dhchap_dhgroups": [ 00:19:06.888 "null", 00:19:06.888 "ffdhe2048", 00:19:06.888 "ffdhe3072", 00:19:06.888 "ffdhe4096", 00:19:06.888 "ffdhe6144", 00:19:06.888 "ffdhe8192" 00:19:06.888 ] 00:19:06.888 } 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "method": "nvmf_set_max_subsystems", 00:19:06.888 "params": { 00:19:06.888 "max_subsystems": 1024 00:19:06.888 } 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "method": "nvmf_set_crdt", 00:19:06.888 "params": { 00:19:06.888 "crdt1": 0, 00:19:06.888 "crdt2": 0, 00:19:06.888 "crdt3": 0 00:19:06.888 } 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "method": "nvmf_create_transport", 00:19:06.888 "params": { 00:19:06.888 "trtype": "TCP", 00:19:06.888 "max_queue_depth": 128, 00:19:06.888 "max_io_qpairs_per_ctrlr": 127, 00:19:06.888 "in_capsule_data_size": 4096, 00:19:06.888 "max_io_size": 131072, 00:19:06.888 "io_unit_size": 131072, 00:19:06.888 "max_aq_depth": 128, 00:19:06.888 "num_shared_buffers": 511, 00:19:06.888 "buf_cache_size": 4294967295, 00:19:06.888 "dif_insert_or_strip": false, 00:19:06.888 "zcopy": false, 00:19:06.888 "c2h_success": false, 00:19:06.888 "sock_priority": 0, 00:19:06.888 "abort_timeout_sec": 1, 00:19:06.888 "ack_timeout": 0, 00:19:06.888 "data_wr_pool_size": 0 00:19:06.888 } 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "method": "nvmf_create_subsystem", 00:19:06.888 "params": { 00:19:06.888 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.888 "allow_any_host": false, 00:19:06.888 "serial_number": "00000000000000000000", 00:19:06.888 "model_number": "SPDK bdev Controller", 00:19:06.888 "max_namespaces": 32, 00:19:06.888 "min_cntlid": 1, 00:19:06.888 "max_cntlid": 65519, 00:19:06.888 
"ana_reporting": false 00:19:06.888 } 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "method": "nvmf_subsystem_add_host", 00:19:06.888 "params": { 00:19:06.888 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.888 "host": "nqn.2016-06.io.spdk:host1", 00:19:06.888 "psk": "key0" 00:19:06.888 } 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "method": "nvmf_subsystem_add_ns", 00:19:06.888 "params": { 00:19:06.888 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.888 "namespace": { 00:19:06.888 "nsid": 1, 00:19:06.888 "bdev_name": "malloc0", 00:19:06.888 "nguid": "00E29CD1E721417B8D22312B7C2AFF2A", 00:19:06.888 "uuid": "00e29cd1-e721-417b-8d22-312b7c2aff2a", 00:19:06.888 "no_auto_visible": false 00:19:06.888 } 00:19:06.888 } 00:19:06.888 }, 00:19:06.888 { 00:19:06.888 "method": "nvmf_subsystem_add_listener", 00:19:06.888 "params": { 00:19:06.888 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.888 "listen_address": { 00:19:06.888 "trtype": "TCP", 00:19:06.888 "adrfam": "IPv4", 00:19:06.888 "traddr": "10.0.0.2", 00:19:06.888 "trsvcid": "4420" 00:19:06.888 }, 00:19:06.888 "secure_channel": false, 00:19:06.888 "sock_impl": "ssl" 00:19:06.888 } 00:19:06.888 } 00:19:06.888 ] 00:19:06.888 } 00:19:06.888 ] 00:19:06.888 }' 00:19:06.888 04:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:07.148 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:07.148 "subsystems": [ 00:19:07.148 { 00:19:07.148 "subsystem": "keyring", 00:19:07.148 "config": [ 00:19:07.148 { 00:19:07.148 "method": "keyring_file_add_key", 00:19:07.148 "params": { 00:19:07.148 "name": "key0", 00:19:07.148 "path": "/tmp/tmp.Yw4nntCAEq" 00:19:07.148 } 00:19:07.148 } 00:19:07.148 ] 00:19:07.148 }, 00:19:07.148 { 00:19:07.148 "subsystem": "iobuf", 00:19:07.148 "config": [ 00:19:07.148 { 00:19:07.148 "method": "iobuf_set_options", 00:19:07.148 "params": { 00:19:07.148 
"small_pool_count": 8192, 00:19:07.148 "large_pool_count": 1024, 00:19:07.148 "small_bufsize": 8192, 00:19:07.148 "large_bufsize": 135168, 00:19:07.148 "enable_numa": false 00:19:07.148 } 00:19:07.148 } 00:19:07.148 ] 00:19:07.148 }, 00:19:07.148 { 00:19:07.148 "subsystem": "sock", 00:19:07.148 "config": [ 00:19:07.148 { 00:19:07.148 "method": "sock_set_default_impl", 00:19:07.148 "params": { 00:19:07.148 "impl_name": "posix" 00:19:07.148 } 00:19:07.148 }, 00:19:07.148 { 00:19:07.148 "method": "sock_impl_set_options", 00:19:07.148 "params": { 00:19:07.148 "impl_name": "ssl", 00:19:07.148 "recv_buf_size": 4096, 00:19:07.148 "send_buf_size": 4096, 00:19:07.148 "enable_recv_pipe": true, 00:19:07.148 "enable_quickack": false, 00:19:07.148 "enable_placement_id": 0, 00:19:07.148 "enable_zerocopy_send_server": true, 00:19:07.148 "enable_zerocopy_send_client": false, 00:19:07.148 "zerocopy_threshold": 0, 00:19:07.148 "tls_version": 0, 00:19:07.148 "enable_ktls": false 00:19:07.148 } 00:19:07.148 }, 00:19:07.148 { 00:19:07.148 "method": "sock_impl_set_options", 00:19:07.148 "params": { 00:19:07.148 "impl_name": "posix", 00:19:07.148 "recv_buf_size": 2097152, 00:19:07.148 "send_buf_size": 2097152, 00:19:07.148 "enable_recv_pipe": true, 00:19:07.148 "enable_quickack": false, 00:19:07.148 "enable_placement_id": 0, 00:19:07.148 "enable_zerocopy_send_server": true, 00:19:07.148 "enable_zerocopy_send_client": false, 00:19:07.148 "zerocopy_threshold": 0, 00:19:07.148 "tls_version": 0, 00:19:07.148 "enable_ktls": false 00:19:07.148 } 00:19:07.148 } 00:19:07.148 ] 00:19:07.148 }, 00:19:07.148 { 00:19:07.148 "subsystem": "vmd", 00:19:07.148 "config": [] 00:19:07.148 }, 00:19:07.148 { 00:19:07.148 "subsystem": "accel", 00:19:07.148 "config": [ 00:19:07.148 { 00:19:07.148 "method": "accel_set_options", 00:19:07.148 "params": { 00:19:07.148 "small_cache_size": 128, 00:19:07.148 "large_cache_size": 16, 00:19:07.148 "task_count": 2048, 00:19:07.148 "sequence_count": 2048, 00:19:07.148 
"buf_count": 2048 00:19:07.148 } 00:19:07.148 } 00:19:07.148 ] 00:19:07.148 }, 00:19:07.148 { 00:19:07.148 "subsystem": "bdev", 00:19:07.148 "config": [ 00:19:07.148 { 00:19:07.148 "method": "bdev_set_options", 00:19:07.148 "params": { 00:19:07.148 "bdev_io_pool_size": 65535, 00:19:07.148 "bdev_io_cache_size": 256, 00:19:07.148 "bdev_auto_examine": true, 00:19:07.148 "iobuf_small_cache_size": 128, 00:19:07.148 "iobuf_large_cache_size": 16 00:19:07.148 } 00:19:07.148 }, 00:19:07.148 { 00:19:07.148 "method": "bdev_raid_set_options", 00:19:07.148 "params": { 00:19:07.148 "process_window_size_kb": 1024, 00:19:07.148 "process_max_bandwidth_mb_sec": 0 00:19:07.148 } 00:19:07.148 }, 00:19:07.148 { 00:19:07.148 "method": "bdev_iscsi_set_options", 00:19:07.148 "params": { 00:19:07.148 "timeout_sec": 30 00:19:07.148 } 00:19:07.148 }, 00:19:07.148 { 00:19:07.148 "method": "bdev_nvme_set_options", 00:19:07.148 "params": { 00:19:07.148 "action_on_timeout": "none", 00:19:07.148 "timeout_us": 0, 00:19:07.148 "timeout_admin_us": 0, 00:19:07.148 "keep_alive_timeout_ms": 10000, 00:19:07.148 "arbitration_burst": 0, 00:19:07.148 "low_priority_weight": 0, 00:19:07.148 "medium_priority_weight": 0, 00:19:07.148 "high_priority_weight": 0, 00:19:07.148 "nvme_adminq_poll_period_us": 10000, 00:19:07.148 "nvme_ioq_poll_period_us": 0, 00:19:07.148 "io_queue_requests": 512, 00:19:07.148 "delay_cmd_submit": true, 00:19:07.148 "transport_retry_count": 4, 00:19:07.148 "bdev_retry_count": 3, 00:19:07.148 "transport_ack_timeout": 0, 00:19:07.148 "ctrlr_loss_timeout_sec": 0, 00:19:07.148 "reconnect_delay_sec": 0, 00:19:07.148 "fast_io_fail_timeout_sec": 0, 00:19:07.148 "disable_auto_failback": false, 00:19:07.148 "generate_uuids": false, 00:19:07.148 "transport_tos": 0, 00:19:07.148 "nvme_error_stat": false, 00:19:07.148 "rdma_srq_size": 0, 00:19:07.148 "io_path_stat": false, 00:19:07.148 "allow_accel_sequence": false, 00:19:07.148 "rdma_max_cq_size": 0, 00:19:07.148 "rdma_cm_event_timeout_ms": 0, 
00:19:07.148 "dhchap_digests": [ 00:19:07.148 "sha256", 00:19:07.148 "sha384", 00:19:07.148 "sha512" 00:19:07.148 ], 00:19:07.148 "dhchap_dhgroups": [ 00:19:07.148 "null", 00:19:07.148 "ffdhe2048", 00:19:07.148 "ffdhe3072", 00:19:07.148 "ffdhe4096", 00:19:07.148 "ffdhe6144", 00:19:07.148 "ffdhe8192" 00:19:07.148 ] 00:19:07.148 } 00:19:07.148 }, 00:19:07.148 { 00:19:07.148 "method": "bdev_nvme_attach_controller", 00:19:07.148 "params": { 00:19:07.148 "name": "nvme0", 00:19:07.148 "trtype": "TCP", 00:19:07.148 "adrfam": "IPv4", 00:19:07.148 "traddr": "10.0.0.2", 00:19:07.148 "trsvcid": "4420", 00:19:07.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.148 "prchk_reftag": false, 00:19:07.148 "prchk_guard": false, 00:19:07.148 "ctrlr_loss_timeout_sec": 0, 00:19:07.148 "reconnect_delay_sec": 0, 00:19:07.148 "fast_io_fail_timeout_sec": 0, 00:19:07.148 "psk": "key0", 00:19:07.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:07.149 "hdgst": false, 00:19:07.149 "ddgst": false, 00:19:07.149 "multipath": "multipath" 00:19:07.149 } 00:19:07.149 }, 00:19:07.149 { 00:19:07.149 "method": "bdev_nvme_set_hotplug", 00:19:07.149 "params": { 00:19:07.149 "period_us": 100000, 00:19:07.149 "enable": false 00:19:07.149 } 00:19:07.149 }, 00:19:07.149 { 00:19:07.149 "method": "bdev_enable_histogram", 00:19:07.149 "params": { 00:19:07.149 "name": "nvme0n1", 00:19:07.149 "enable": true 00:19:07.149 } 00:19:07.149 }, 00:19:07.149 { 00:19:07.149 "method": "bdev_wait_for_examine" 00:19:07.149 } 00:19:07.149 ] 00:19:07.149 }, 00:19:07.149 { 00:19:07.149 "subsystem": "nbd", 00:19:07.149 "config": [] 00:19:07.149 } 00:19:07.149 ] 00:19:07.149 }' 00:19:07.149 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 648963 00:19:07.149 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 648963 ']' 00:19:07.149 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 648963 00:19:07.149 04:55:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:07.149 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.149 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 648963 00:19:07.149 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:07.149 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:07.149 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 648963' 00:19:07.149 killing process with pid 648963 00:19:07.149 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 648963 00:19:07.149 Received shutdown signal, test time was about 1.000000 seconds 00:19:07.149 00:19:07.149 Latency(us) 00:19:07.149 [2024-12-10T03:55:58.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.149 [2024-12-10T03:55:58.286Z] =================================================================================================================== 00:19:07.149 [2024-12-10T03:55:58.286Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:07.149 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 648963 00:19:07.149 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 648931 00:19:07.149 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 648931 ']' 00:19:07.149 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 648931 00:19:07.149 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:07.149 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.149 04:55:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 648931 00:19:07.409 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.409 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.409 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 648931' 00:19:07.409 killing process with pid 648931 00:19:07.409 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 648931 00:19:07.409 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 648931 00:19:07.409 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:07.409 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:07.409 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.409 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:07.409 "subsystems": [ 00:19:07.409 { 00:19:07.409 "subsystem": "keyring", 00:19:07.409 "config": [ 00:19:07.409 { 00:19:07.409 "method": "keyring_file_add_key", 00:19:07.409 "params": { 00:19:07.409 "name": "key0", 00:19:07.409 "path": "/tmp/tmp.Yw4nntCAEq" 00:19:07.409 } 00:19:07.409 } 00:19:07.409 ] 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "subsystem": "iobuf", 00:19:07.409 "config": [ 00:19:07.409 { 00:19:07.409 "method": "iobuf_set_options", 00:19:07.409 "params": { 00:19:07.409 "small_pool_count": 8192, 00:19:07.409 "large_pool_count": 1024, 00:19:07.409 "small_bufsize": 8192, 00:19:07.409 "large_bufsize": 135168, 00:19:07.409 "enable_numa": false 00:19:07.409 } 00:19:07.409 } 00:19:07.409 ] 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "subsystem": "sock", 00:19:07.409 "config": [ 00:19:07.409 { 
00:19:07.409 "method": "sock_set_default_impl", 00:19:07.409 "params": { 00:19:07.409 "impl_name": "posix" 00:19:07.409 } 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "method": "sock_impl_set_options", 00:19:07.409 "params": { 00:19:07.409 "impl_name": "ssl", 00:19:07.409 "recv_buf_size": 4096, 00:19:07.409 "send_buf_size": 4096, 00:19:07.409 "enable_recv_pipe": true, 00:19:07.409 "enable_quickack": false, 00:19:07.409 "enable_placement_id": 0, 00:19:07.409 "enable_zerocopy_send_server": true, 00:19:07.409 "enable_zerocopy_send_client": false, 00:19:07.409 "zerocopy_threshold": 0, 00:19:07.409 "tls_version": 0, 00:19:07.409 "enable_ktls": false 00:19:07.409 } 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "method": "sock_impl_set_options", 00:19:07.409 "params": { 00:19:07.409 "impl_name": "posix", 00:19:07.409 "recv_buf_size": 2097152, 00:19:07.409 "send_buf_size": 2097152, 00:19:07.409 "enable_recv_pipe": true, 00:19:07.409 "enable_quickack": false, 00:19:07.409 "enable_placement_id": 0, 00:19:07.409 "enable_zerocopy_send_server": true, 00:19:07.409 "enable_zerocopy_send_client": false, 00:19:07.409 "zerocopy_threshold": 0, 00:19:07.409 "tls_version": 0, 00:19:07.409 "enable_ktls": false 00:19:07.409 } 00:19:07.409 } 00:19:07.409 ] 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "subsystem": "vmd", 00:19:07.409 "config": [] 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "subsystem": "accel", 00:19:07.409 "config": [ 00:19:07.409 { 00:19:07.409 "method": "accel_set_options", 00:19:07.409 "params": { 00:19:07.409 "small_cache_size": 128, 00:19:07.409 "large_cache_size": 16, 00:19:07.409 "task_count": 2048, 00:19:07.409 "sequence_count": 2048, 00:19:07.409 "buf_count": 2048 00:19:07.409 } 00:19:07.409 } 00:19:07.409 ] 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "subsystem": "bdev", 00:19:07.409 "config": [ 00:19:07.409 { 00:19:07.409 "method": "bdev_set_options", 00:19:07.409 "params": { 00:19:07.409 "bdev_io_pool_size": 65535, 00:19:07.409 "bdev_io_cache_size": 256, 
00:19:07.409 "bdev_auto_examine": true, 00:19:07.409 "iobuf_small_cache_size": 128, 00:19:07.409 "iobuf_large_cache_size": 16 00:19:07.409 } 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "method": "bdev_raid_set_options", 00:19:07.409 "params": { 00:19:07.409 "process_window_size_kb": 1024, 00:19:07.409 "process_max_bandwidth_mb_sec": 0 00:19:07.409 } 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "method": "bdev_iscsi_set_options", 00:19:07.409 "params": { 00:19:07.409 "timeout_sec": 30 00:19:07.409 } 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "method": "bdev_nvme_set_options", 00:19:07.409 "params": { 00:19:07.409 "action_on_timeout": "none", 00:19:07.409 "timeout_us": 0, 00:19:07.409 "timeout_admin_us": 0, 00:19:07.409 "keep_alive_timeout_ms": 10000, 00:19:07.409 "arbitration_burst": 0, 00:19:07.409 "low_priority_weight": 0, 00:19:07.409 "medium_priority_weight": 0, 00:19:07.409 "high_priority_weight": 0, 00:19:07.409 "nvme_adminq_poll_period_us": 10000, 00:19:07.409 "nvme_ioq_poll_period_us": 0, 00:19:07.409 "io_queue_requests": 0, 00:19:07.409 "delay_cmd_submit": true, 00:19:07.409 "transport_retry_count": 4, 00:19:07.409 "bdev_retry_count": 3, 00:19:07.409 "transport_ack_timeout": 0, 00:19:07.409 "ctrlr_loss_timeout_sec": 0, 00:19:07.409 "reconnect_delay_sec": 0, 00:19:07.409 "fast_io_fail_timeout_sec": 0, 00:19:07.409 "disable_auto_failback": false, 00:19:07.409 "generate_uuids": false, 00:19:07.409 "transport_tos": 0, 00:19:07.409 "nvme_error_stat": false, 00:19:07.409 "rdma_srq_size": 0, 00:19:07.409 "io_path_stat": false, 00:19:07.409 "allow_accel_sequence": false, 00:19:07.409 "rdma_max_cq_size": 0, 00:19:07.409 "rdma_cm_event_timeout_ms": 0, 00:19:07.409 "dhchap_digests": [ 00:19:07.409 "sha256", 00:19:07.409 "sha384", 00:19:07.409 "sha512" 00:19:07.409 ], 00:19:07.409 "dhchap_dhgroups": [ 00:19:07.409 "null", 00:19:07.409 "ffdhe2048", 00:19:07.409 "ffdhe3072", 00:19:07.409 "ffdhe4096", 00:19:07.409 "ffdhe6144", 00:19:07.409 "ffdhe8192" 00:19:07.409 ] 
00:19:07.409 } 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "method": "bdev_nvme_set_hotplug", 00:19:07.409 "params": { 00:19:07.409 "period_us": 100000, 00:19:07.409 "enable": false 00:19:07.409 } 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "method": "bdev_malloc_create", 00:19:07.409 "params": { 00:19:07.409 "name": "malloc0", 00:19:07.409 "num_blocks": 8192, 00:19:07.409 "block_size": 4096, 00:19:07.409 "physical_block_size": 4096, 00:19:07.409 "uuid": "00e29cd1-e721-417b-8d22-312b7c2aff2a", 00:19:07.409 "optimal_io_boundary": 0, 00:19:07.409 "md_size": 0, 00:19:07.409 "dif_type": 0, 00:19:07.409 "dif_is_head_of_md": false, 00:19:07.409 "dif_pi_format": 0 00:19:07.409 } 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "method": "bdev_wait_for_examine" 00:19:07.409 } 00:19:07.409 ] 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "subsystem": "nbd", 00:19:07.409 "config": [] 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "subsystem": "scheduler", 00:19:07.409 "config": [ 00:19:07.409 { 00:19:07.409 "method": "framework_set_scheduler", 00:19:07.409 "params": { 00:19:07.409 "name": "static" 00:19:07.409 } 00:19:07.409 } 00:19:07.409 ] 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "subsystem": "nvmf", 00:19:07.409 "config": [ 00:19:07.409 { 00:19:07.409 "method": "nvmf_set_config", 00:19:07.409 "params": { 00:19:07.409 "discovery_filter": "match_any", 00:19:07.409 "admin_cmd_passthru": { 00:19:07.409 "identify_ctrlr": false 00:19:07.409 }, 00:19:07.409 "dhchap_digests": [ 00:19:07.409 "sha256", 00:19:07.409 "sha384", 00:19:07.409 "sha512" 00:19:07.409 ], 00:19:07.409 "dhchap_dhgroups": [ 00:19:07.409 "null", 00:19:07.409 "ffdhe2048", 00:19:07.409 "ffdhe3072", 00:19:07.409 "ffdhe4096", 00:19:07.409 "ffdhe6144", 00:19:07.409 "ffdhe8192" 00:19:07.409 ] 00:19:07.409 } 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "method": "nvmf_set_max_subsystems", 00:19:07.409 "params": { 00:19:07.409 "max_subsystems": 1024 00:19:07.409 } 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "method": 
"nvmf_set_crdt", 00:19:07.409 "params": { 00:19:07.409 "crdt1": 0, 00:19:07.409 "crdt2": 0, 00:19:07.409 "crdt3": 0 00:19:07.409 } 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "method": "nvmf_create_transport", 00:19:07.409 "params": { 00:19:07.409 "trtype": "TCP", 00:19:07.409 "max_queue_depth": 128, 00:19:07.409 "max_io_qpairs_per_ctrlr": 127, 00:19:07.409 "in_capsule_data_size": 4096, 00:19:07.409 "max_io_size": 131072, 00:19:07.409 "io_unit_size": 131072, 00:19:07.409 "max_aq_depth": 128, 00:19:07.409 "num_shared_buffers": 511, 00:19:07.409 "buf_cache_size": 4294967295, 00:19:07.409 "dif_insert_or_strip": false, 00:19:07.409 "zcopy": false, 00:19:07.409 "c2h_success": false, 00:19:07.409 "sock_priority": 0, 00:19:07.409 "abort_timeout_sec": 1, 00:19:07.409 "ack_timeout": 0, 00:19:07.409 "data_wr_pool_size": 0 00:19:07.409 } 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "method": "nvmf_create_subsystem", 00:19:07.409 "params": { 00:19:07.410 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.410 "allow_any_host": false, 00:19:07.410 "serial_number": "00000000000000000000", 00:19:07.410 "model_number": "SPDK bdev Controller", 00:19:07.410 "max_namespaces": 32, 00:19:07.410 "min_cntlid": 1, 00:19:07.410 "max_cntlid": 65519, 00:19:07.410 "ana_reporting": false 00:19:07.410 } 00:19:07.410 }, 00:19:07.410 { 00:19:07.410 "method": "nvmf_subsystem_add_host", 00:19:07.410 "params": { 00:19:07.410 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.410 "host": "nqn.2016-06.io.spdk:host1", 00:19:07.410 "psk": "key0" 00:19:07.410 } 00:19:07.410 }, 00:19:07.410 { 00:19:07.410 "method": "nvmf_subsystem_add_ns", 00:19:07.410 "params": { 00:19:07.410 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.410 "namespace": { 00:19:07.410 "nsid": 1, 00:19:07.410 "bdev_name": "malloc0", 00:19:07.410 "nguid": "00E29CD1E721417B8D22312B7C2AFF2A", 00:19:07.410 "uuid": "00e29cd1-e721-417b-8d22-312b7c2aff2a", 00:19:07.410 "no_auto_visible": false 00:19:07.410 } 00:19:07.410 } 00:19:07.410 }, 00:19:07.410 { 
00:19:07.410 "method": "nvmf_subsystem_add_listener", 00:19:07.410 "params": { 00:19:07.410 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.410 "listen_address": { 00:19:07.410 "trtype": "TCP", 00:19:07.410 "adrfam": "IPv4", 00:19:07.410 "traddr": "10.0.0.2", 00:19:07.410 "trsvcid": "4420" 00:19:07.410 }, 00:19:07.410 "secure_channel": false, 00:19:07.410 "sock_impl": "ssl" 00:19:07.410 } 00:19:07.410 } 00:19:07.410 ] 00:19:07.410 } 00:19:07.410 ] 00:19:07.410 }' 00:19:07.410 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.410 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=649416 00:19:07.410 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 649416 00:19:07.410 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:07.410 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 649416 ']' 00:19:07.410 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.410 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.410 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.410 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.410 04:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.410 [2024-12-10 04:55:58.513450] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:19:07.410 [2024-12-10 04:55:58.513496] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.669 [2024-12-10 04:55:58.588959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.669 [2024-12-10 04:55:58.624015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.669 [2024-12-10 04:55:58.624051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.669 [2024-12-10 04:55:58.624058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.669 [2024-12-10 04:55:58.624064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.669 [2024-12-10 04:55:58.624068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:07.669 [2024-12-10 04:55:58.624609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.928 [2024-12-10 04:55:58.838377] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.928 [2024-12-10 04:55:58.870412] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:07.928 [2024-12-10 04:55:58.870614] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.496 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.496 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:08.496 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:08.496 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.496 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.496 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.496 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=649657 00:19:08.496 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 649657 /var/tmp/bdevperf.sock 00:19:08.496 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 649657 ']' 00:19:08.496 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.496 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:08.496 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:08.496 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:08.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:08.496 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:08.496 "subsystems": [ 00:19:08.496 { 00:19:08.496 "subsystem": "keyring", 00:19:08.496 "config": [ 00:19:08.496 { 00:19:08.496 "method": "keyring_file_add_key", 00:19:08.496 "params": { 00:19:08.496 "name": "key0", 00:19:08.496 "path": "/tmp/tmp.Yw4nntCAEq" 00:19:08.496 } 00:19:08.496 } 00:19:08.496 ] 00:19:08.496 }, 00:19:08.496 { 00:19:08.496 "subsystem": "iobuf", 00:19:08.496 "config": [ 00:19:08.496 { 00:19:08.496 "method": "iobuf_set_options", 00:19:08.496 "params": { 00:19:08.496 "small_pool_count": 8192, 00:19:08.496 "large_pool_count": 1024, 00:19:08.496 "small_bufsize": 8192, 00:19:08.496 "large_bufsize": 135168, 00:19:08.496 "enable_numa": false 00:19:08.496 } 00:19:08.496 } 00:19:08.496 ] 00:19:08.496 }, 00:19:08.496 { 00:19:08.496 "subsystem": "sock", 00:19:08.496 "config": [ 00:19:08.496 { 00:19:08.496 "method": "sock_set_default_impl", 00:19:08.496 "params": { 00:19:08.496 "impl_name": "posix" 00:19:08.496 } 00:19:08.496 }, 00:19:08.496 { 00:19:08.496 "method": "sock_impl_set_options", 00:19:08.496 "params": { 00:19:08.496 "impl_name": "ssl", 00:19:08.496 "recv_buf_size": 4096, 00:19:08.496 "send_buf_size": 4096, 00:19:08.496 "enable_recv_pipe": true, 00:19:08.496 "enable_quickack": false, 00:19:08.496 "enable_placement_id": 0, 00:19:08.496 "enable_zerocopy_send_server": true, 00:19:08.496 "enable_zerocopy_send_client": false, 00:19:08.496 "zerocopy_threshold": 0, 00:19:08.496 "tls_version": 0, 00:19:08.496 "enable_ktls": false 00:19:08.496 } 00:19:08.496 }, 00:19:08.496 { 00:19:08.496 "method": "sock_impl_set_options", 00:19:08.496 "params": { 
00:19:08.496 "impl_name": "posix", 00:19:08.496 "recv_buf_size": 2097152, 00:19:08.496 "send_buf_size": 2097152, 00:19:08.496 "enable_recv_pipe": true, 00:19:08.496 "enable_quickack": false, 00:19:08.496 "enable_placement_id": 0, 00:19:08.496 "enable_zerocopy_send_server": true, 00:19:08.496 "enable_zerocopy_send_client": false, 00:19:08.496 "zerocopy_threshold": 0, 00:19:08.496 "tls_version": 0, 00:19:08.496 "enable_ktls": false 00:19:08.496 } 00:19:08.496 } 00:19:08.496 ] 00:19:08.496 }, 00:19:08.496 { 00:19:08.496 "subsystem": "vmd", 00:19:08.496 "config": [] 00:19:08.496 }, 00:19:08.496 { 00:19:08.496 "subsystem": "accel", 00:19:08.496 "config": [ 00:19:08.496 { 00:19:08.496 "method": "accel_set_options", 00:19:08.496 "params": { 00:19:08.496 "small_cache_size": 128, 00:19:08.496 "large_cache_size": 16, 00:19:08.496 "task_count": 2048, 00:19:08.496 "sequence_count": 2048, 00:19:08.496 "buf_count": 2048 00:19:08.496 } 00:19:08.496 } 00:19:08.496 ] 00:19:08.496 }, 00:19:08.496 { 00:19:08.496 "subsystem": "bdev", 00:19:08.496 "config": [ 00:19:08.496 { 00:19:08.496 "method": "bdev_set_options", 00:19:08.496 "params": { 00:19:08.496 "bdev_io_pool_size": 65535, 00:19:08.496 "bdev_io_cache_size": 256, 00:19:08.496 "bdev_auto_examine": true, 00:19:08.496 "iobuf_small_cache_size": 128, 00:19:08.496 "iobuf_large_cache_size": 16 00:19:08.496 } 00:19:08.496 }, 00:19:08.496 { 00:19:08.496 "method": "bdev_raid_set_options", 00:19:08.496 "params": { 00:19:08.496 "process_window_size_kb": 1024, 00:19:08.496 "process_max_bandwidth_mb_sec": 0 00:19:08.496 } 00:19:08.496 }, 00:19:08.496 { 00:19:08.496 "method": "bdev_iscsi_set_options", 00:19:08.496 "params": { 00:19:08.496 "timeout_sec": 30 00:19:08.496 } 00:19:08.496 }, 00:19:08.496 { 00:19:08.496 "method": "bdev_nvme_set_options", 00:19:08.496 "params": { 00:19:08.496 "action_on_timeout": "none", 00:19:08.496 "timeout_us": 0, 00:19:08.496 "timeout_admin_us": 0, 00:19:08.496 "keep_alive_timeout_ms": 10000, 00:19:08.496 
"arbitration_burst": 0, 00:19:08.496 "low_priority_weight": 0, 00:19:08.496 "medium_priority_weight": 0, 00:19:08.496 "high_priority_weight": 0, 00:19:08.496 "nvme_adminq_poll_period_us": 10000, 00:19:08.496 "nvme_ioq_poll_period_us": 0, 00:19:08.496 "io_queue_requests": 512, 00:19:08.496 "delay_cmd_submit": true, 00:19:08.496 "transport_retry_count": 4, 00:19:08.496 "bdev_retry_count": 3, 00:19:08.496 "transport_ack_timeout": 0, 00:19:08.496 "ctrlr_loss_timeout_sec": 0, 00:19:08.496 "reconnect_delay_sec": 0, 00:19:08.496 "fast_io_fail_timeout_sec": 0, 00:19:08.496 "disable_auto_failback": false, 00:19:08.496 "generate_uuids": false, 00:19:08.496 "transport_tos": 0, 00:19:08.496 "nvme_error_stat": false, 00:19:08.496 "rdma_srq_size": 0, 00:19:08.496 "io_path_stat": false, 00:19:08.496 "allow_accel_sequence": false, 00:19:08.496 "rdma_max_cq_size": 0, 00:19:08.496 "rdma_cm_event_timeout_ms": 0, 00:19:08.496 "dhchap_digests": [ 00:19:08.496 "sha256", 00:19:08.496 "sha384", 00:19:08.496 "sha512" 00:19:08.496 ], 00:19:08.496 "dhchap_dhgroups": [ 00:19:08.496 "null", 00:19:08.497 "ffdhe2048", 00:19:08.497 "ffdhe3072", 00:19:08.497 "ffdhe4096", 00:19:08.497 "ffdhe6144", 00:19:08.497 "ffdhe8192" 00:19:08.497 ] 00:19:08.497 } 00:19:08.497 }, 00:19:08.497 { 00:19:08.497 "method": "bdev_nvme_attach_controller", 00:19:08.497 "params": { 00:19:08.497 "name": "nvme0", 00:19:08.497 "trtype": "TCP", 00:19:08.497 "adrfam": "IPv4", 00:19:08.497 "traddr": "10.0.0.2", 00:19:08.497 "trsvcid": "4420", 00:19:08.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.497 "prchk_reftag": false, 00:19:08.497 "prchk_guard": false, 00:19:08.497 "ctrlr_loss_timeout_sec": 0, 00:19:08.497 "reconnect_delay_sec": 0, 00:19:08.497 "fast_io_fail_timeout_sec": 0, 00:19:08.497 "psk": "key0", 00:19:08.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.497 "hdgst": false, 00:19:08.497 "ddgst": false, 00:19:08.497 "multipath": "multipath" 00:19:08.497 } 00:19:08.497 }, 00:19:08.497 { 00:19:08.497 
"method": "bdev_nvme_set_hotplug", 00:19:08.497 "params": { 00:19:08.497 "period_us": 100000, 00:19:08.497 "enable": false 00:19:08.497 } 00:19:08.497 }, 00:19:08.497 { 00:19:08.497 "method": "bdev_enable_histogram", 00:19:08.497 "params": { 00:19:08.497 "name": "nvme0n1", 00:19:08.497 "enable": true 00:19:08.497 } 00:19:08.497 }, 00:19:08.497 { 00:19:08.497 "method": "bdev_wait_for_examine" 00:19:08.497 } 00:19:08.497 ] 00:19:08.497 }, 00:19:08.497 { 00:19:08.497 "subsystem": "nbd", 00:19:08.497 "config": [] 00:19:08.497 } 00:19:08.497 ] 00:19:08.497 }' 00:19:08.497 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.497 04:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.497 [2024-12-10 04:55:59.426036] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:19:08.497 [2024-12-10 04:55:59.426080] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid649657 ] 00:19:08.497 [2024-12-10 04:55:59.498159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.497 [2024-12-10 04:55:59.538373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.756 [2024-12-10 04:55:59.690749] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:09.323 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.323 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:09.323 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:09.323 04:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:09.581 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.581 04:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:09.581 Running I/O for 1 seconds... 00:19:10.517 5464.00 IOPS, 21.34 MiB/s 00:19:10.517 Latency(us) 00:19:10.517 [2024-12-10T03:56:01.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.517 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:10.517 Verification LBA range: start 0x0 length 0x2000 00:19:10.517 nvme0n1 : 1.01 5520.32 21.56 0.00 0.00 23025.80 5211.67 22719.15 00:19:10.517 [2024-12-10T03:56:01.654Z] =================================================================================================================== 00:19:10.517 [2024-12-10T03:56:01.655Z] Total : 5520.32 21.56 0.00 0.00 23025.80 5211.67 22719.15 00:19:10.518 { 00:19:10.518 "results": [ 00:19:10.518 { 00:19:10.518 "job": "nvme0n1", 00:19:10.518 "core_mask": "0x2", 00:19:10.518 "workload": "verify", 00:19:10.518 "status": "finished", 00:19:10.518 "verify_range": { 00:19:10.518 "start": 0, 00:19:10.518 "length": 8192 00:19:10.518 }, 00:19:10.518 "queue_depth": 128, 00:19:10.518 "io_size": 4096, 00:19:10.518 "runtime": 1.012984, 00:19:10.518 "iops": 5520.324111733255, 00:19:10.518 "mibps": 21.56376606145803, 00:19:10.518 "io_failed": 0, 00:19:10.518 "io_timeout": 0, 00:19:10.518 "avg_latency_us": 23025.79556645548, 00:19:10.518 "min_latency_us": 5211.672380952381, 00:19:10.518 "max_latency_us": 22719.146666666667 00:19:10.518 } 00:19:10.518 ], 00:19:10.518 "core_count": 1 00:19:10.518 } 00:19:10.518 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:10.518 04:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:10.518 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:10.518 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:10.518 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:10.518 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:10.518 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:10.518 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:10.518 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:10.518 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:10.518 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:10.518 nvmf_trace.0 00:19:10.777 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:10.777 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 649657 00:19:10.777 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 649657 ']' 00:19:10.777 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 649657 00:19:10.777 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:10.777 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.777 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 649657 00:19:10.777 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:10.777 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:10.777 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 649657' 00:19:10.777 killing process with pid 649657 00:19:10.777 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 649657 00:19:10.777 Received shutdown signal, test time was about 1.000000 seconds 00:19:10.777 00:19:10.777 Latency(us) 00:19:10.777 [2024-12-10T03:56:01.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.777 [2024-12-10T03:56:01.914Z] =================================================================================================================== 00:19:10.777 [2024-12-10T03:56:01.914Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:10.777 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 649657 00:19:11.036 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:11.036 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:11.036 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:11.036 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:11.036 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:11.036 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:11.036 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:11.036 rmmod nvme_tcp 00:19:11.036 rmmod nvme_fabrics 00:19:11.036 rmmod nvme_keyring 00:19:11.036 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:19:11.036 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:11.036 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:11.036 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 649416 ']' 00:19:11.036 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 649416 00:19:11.036 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 649416 ']' 00:19:11.036 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 649416 00:19:11.036 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:11.036 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.036 04:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 649416 00:19:11.036 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.036 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.036 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 649416' 00:19:11.036 killing process with pid 649416 00:19:11.036 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 649416 00:19:11.036 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 649416 00:19:11.296 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:11.296 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:11.296 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:11.296 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:19:11.296 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:11.296 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:11.296 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:11.296 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:11.296 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:11.296 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.296 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.296 04:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.202 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:13.202 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.E3v8zS9Dgy /tmp/tmp.z32TWrbelL /tmp/tmp.Yw4nntCAEq 00:19:13.202 00:19:13.202 real 1m19.554s 00:19:13.202 user 2m2.052s 00:19:13.202 sys 0m30.078s 00:19:13.202 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.202 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.202 ************************************ 00:19:13.202 END TEST nvmf_tls 00:19:13.202 ************************************ 00:19:13.202 04:56:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:13.202 04:56:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:13.202 04:56:04 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.202 04:56:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:13.467 ************************************ 00:19:13.467 START TEST nvmf_fips 00:19:13.467 ************************************ 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:13.467 * Looking for test storage... 00:19:13.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.467 
04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:13.467 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:13.468 04:56:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:13.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.468 --rc genhtml_branch_coverage=1 00:19:13.468 --rc genhtml_function_coverage=1 00:19:13.468 --rc genhtml_legend=1 00:19:13.468 --rc geninfo_all_blocks=1 00:19:13.468 --rc geninfo_unexecuted_blocks=1 00:19:13.468 00:19:13.468 ' 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:13.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.468 --rc genhtml_branch_coverage=1 00:19:13.468 --rc genhtml_function_coverage=1 00:19:13.468 --rc genhtml_legend=1 00:19:13.468 --rc geninfo_all_blocks=1 00:19:13.468 --rc geninfo_unexecuted_blocks=1 00:19:13.468 00:19:13.468 ' 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:13.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.468 --rc genhtml_branch_coverage=1 00:19:13.468 --rc genhtml_function_coverage=1 00:19:13.468 --rc genhtml_legend=1 00:19:13.468 --rc geninfo_all_blocks=1 00:19:13.468 --rc geninfo_unexecuted_blocks=1 00:19:13.468 00:19:13.468 ' 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:13.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.468 --rc genhtml_branch_coverage=1 00:19:13.468 --rc genhtml_function_coverage=1 00:19:13.468 --rc genhtml_legend=1 00:19:13.468 --rc geninfo_all_blocks=1 00:19:13.468 --rc geninfo_unexecuted_blocks=1 00:19:13.468 00:19:13.468 ' 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.468 04:56:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.468 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.469 04:56:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:13.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:13.469 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.470 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:13.733 Error setting digest 00:19:13.733 40D280403F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:13.733 40D280403F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:13.733 04:56:04 
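The `openssl md5` failure above is the test's FIPS probe: under a FIPS provider a non-approved digest is rejected at the EVP layer. A minimal sketch of the same check, assuming only that the `openssl` CLI is on `PATH` (on a non-FIPS host both digests succeed):

```shell
# Probe FIPS enforcement the way the trace above does: MD5 must fail
# under a FIPS provider, while an approved digest (SHA-256) still works.
if echo -n test | openssl md5 >/dev/null 2>&1; then
  echo "md5: allowed (FIPS not enforced)"
else
  echo "md5: blocked (FIPS enforced)"
fi
echo -n test | openssl sha256 >/dev/null 2>&1 && echo "sha256: allowed"
```

The test script treats a successful `openssl md5` as a failure condition, which is why the `NOT` wrapper in the trace expects a non-zero exit status.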
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.733 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:13.734 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:13.734 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:13.734 04:56:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:20.309 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:20.309 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.309 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:20.309 Found net devices under 0000:af:00.0: cvl_0_0 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:20.310 Found net devices under 0000:af:00.1: cvl_0_1 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.310 04:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:20.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:19:20.310 00:19:20.310 --- 10.0.0.2 ping statistics --- 00:19:20.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.310 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:20.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:19:20.310 00:19:20.310 --- 10.0.0.1 ping statistics --- 00:19:20.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.310 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:20.310 04:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=653599 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 653599 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 653599 ']' 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.310 04:56:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:20.310 [2024-12-10 04:56:10.680245] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:19:20.310 [2024-12-10 04:56:10.680291] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.310 [2024-12-10 04:56:10.757743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.310 [2024-12-10 04:56:10.798259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.310 [2024-12-10 04:56:10.798294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.310 [2024-12-10 04:56:10.798302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.310 [2024-12-10 04:56:10.798308] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.310 [2024-12-10 04:56:10.798312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:20.310 [2024-12-10 04:56:10.798818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.569 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.569 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:20.569 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:20.569 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:20.569 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:20.569 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.569 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:20.569 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:20.569 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:20.569 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.JBT 00:19:20.569 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:20.569 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.JBT 00:19:20.569 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.JBT 00:19:20.569 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.JBT 00:19:20.569 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:20.829 [2024-12-10 04:56:11.720272] tcp.c: 
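The PSK handling in the trace above follows a common pattern: write the TLS key to a freshly created temp file and restrict it to mode 0600 before handing the path to the RPC. A minimal sketch, assuming GNU `mktemp` and `stat`; the key string is the interchange-format example carried in the log itself:

```shell
# Write a TLS PSK to a temp file readable only by the owner, mirroring
# the mktemp -t / chmod 0600 sequence in the trace above.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)
printf '%s' "$key" > "$key_path"
chmod 0600 "$key_path"
stat -c '%a' "$key_path"   # prints 600
rm -f "$key_path"
```

Keeping the key out of the command line and environment (only the file path is passed to `keyring_file_add_key`) limits its exposure in process listings.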
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.829 [2024-12-10 04:56:11.736281] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:20.829 [2024-12-10 04:56:11.736482] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.829 malloc0 00:19:20.829 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:20.829 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=653845 00:19:20.829 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:20.829 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 653845 /var/tmp/bdevperf.sock 00:19:20.829 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 653845 ']' 00:19:20.829 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.829 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.829 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:20.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.829 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.829 04:56:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:20.829 [2024-12-10 04:56:11.864638] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:19:20.829 [2024-12-10 04:56:11.864689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid653845 ] 00:19:20.829 [2024-12-10 04:56:11.939597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.088 [2024-12-10 04:56:11.979952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.656 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.656 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:21.656 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.JBT 00:19:21.914 04:56:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:22.173 [2024-12-10 04:56:13.073260] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:22.173 TLSTESTn1 00:19:22.173 04:56:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:22.173 Running I/O for 10 seconds... 
00:19:24.487 5497.00 IOPS, 21.47 MiB/s [2024-12-10T03:56:16.560Z] 5492.00 IOPS, 21.45 MiB/s [2024-12-10T03:56:17.496Z] 5494.67 IOPS, 21.46 MiB/s [2024-12-10T03:56:18.433Z] 5512.75 IOPS, 21.53 MiB/s [2024-12-10T03:56:19.369Z] 5514.20 IOPS, 21.54 MiB/s [2024-12-10T03:56:20.306Z] 5520.17 IOPS, 21.56 MiB/s [2024-12-10T03:56:21.321Z] 5522.57 IOPS, 21.57 MiB/s [2024-12-10T03:56:22.333Z] 5536.75 IOPS, 21.63 MiB/s [2024-12-10T03:56:23.711Z] 5540.00 IOPS, 21.64 MiB/s [2024-12-10T03:56:23.711Z] 5555.70 IOPS, 21.70 MiB/s 00:19:32.574 Latency(us) 00:19:32.574 [2024-12-10T03:56:23.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.574 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:32.574 Verification LBA range: start 0x0 length 0x2000 00:19:32.574 TLSTESTn1 : 10.02 5559.01 21.71 0.00 0.00 22991.08 6335.15 22344.66 00:19:32.574 [2024-12-10T03:56:23.711Z] =================================================================================================================== 00:19:32.574 [2024-12-10T03:56:23.711Z] Total : 5559.01 21.71 0.00 0.00 22991.08 6335.15 22344.66 00:19:32.574 { 00:19:32.574 "results": [ 00:19:32.574 { 00:19:32.574 "job": "TLSTESTn1", 00:19:32.574 "core_mask": "0x4", 00:19:32.574 "workload": "verify", 00:19:32.574 "status": "finished", 00:19:32.574 "verify_range": { 00:19:32.574 "start": 0, 00:19:32.574 "length": 8192 00:19:32.574 }, 00:19:32.574 "queue_depth": 128, 00:19:32.574 "io_size": 4096, 00:19:32.574 "runtime": 10.016719, 00:19:32.574 "iops": 5559.005898039069, 00:19:32.574 "mibps": 21.71486678921511, 00:19:32.574 "io_failed": 0, 00:19:32.574 "io_timeout": 0, 00:19:32.574 "avg_latency_us": 22991.08472419128, 00:19:32.574 "min_latency_us": 6335.1466666666665, 00:19:32.574 "max_latency_us": 22344.655238095238 00:19:32.574 } 00:19:32.574 ], 00:19:32.574 "core_count": 1 00:19:32.574 } 00:19:32.574 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:32.574 
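The MiB/s column in the bdevperf summary above is derived directly from IOPS and the 4096-byte IO size. A small sketch recomputing it from the JSON values reported in the log (awk assumed available):

```shell
# Recompute throughput from the bdevperf results above:
# MiB/s = IOPS * io_size / 2^20. Values copied from the log's JSON.
iops=5559.005898039069
io_size=4096
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
# prints 21.71 MiB/s, matching the "mibps" field in the results
```

This is a quick sanity check that the reported `mibps` field is consistent with `iops` and `io_size` rather than an independently measured quantity.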
04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:32.574 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:32.574 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:32.574 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:32.574 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:32.574 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:32.574 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:32.574 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:32.574 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:32.575 nvmf_trace.0 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 653845 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 653845 ']' 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 653845 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 653845 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 653845' 00:19:32.575 killing process with pid 653845 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 653845 00:19:32.575 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.575 00:19:32.575 Latency(us) 00:19:32.575 [2024-12-10T03:56:23.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.575 [2024-12-10T03:56:23.712Z] =================================================================================================================== 00:19:32.575 [2024-12-10T03:56:23.712Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 653845 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:32.575 rmmod nvme_tcp 00:19:32.575 rmmod nvme_fabrics 00:19:32.575 rmmod nvme_keyring 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:32.575 04:56:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 653599 ']' 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 653599 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 653599 ']' 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 653599 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.575 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 653599 00:19:32.834 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:32.834 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:32.834 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 653599' 00:19:32.834 killing process with pid 653599 00:19:32.834 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 653599 00:19:32.834 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 653599 00:19:32.834 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:32.834 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:32.834 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:32.834 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
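The `iptr` cleanup in the trace uses a tag-and-sweep pattern: every rule the test inserts carries an `SPDK_NVMF` comment, so teardown can drop them all by filtering the saved ruleset. A sketch of the filtering step on sample rule text (no root required; the real cleanup pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`; the sample rule lines below are illustrative, not copied from a live ruleset):

```shell
# Tag-and-sweep cleanup: keep only rules NOT tagged with SPDK_NVMF.
saved='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -m comment --comment "SPDK_NVMF tag" -j ACCEPT
-A INPUT -i lo -j ACCEPT'
printf '%s\n' "$saved" | grep -v SPDK_NVMF
# prints only: -A INPUT -i lo -j ACCEPT
```

Tagging rules at insert time means teardown needs no bookkeeping of which rules were added, only the shared comment string.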
00:19:32.834 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:32.834 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:32.834 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:32.834 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:32.834 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:32.834 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.834 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.834 04:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.370 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:35.370 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.JBT 00:19:35.370 00:19:35.370 real 0m21.618s 00:19:35.370 user 0m23.572s 00:19:35.370 sys 0m9.490s 00:19:35.370 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:35.370 04:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:35.370 ************************************ 00:19:35.370 END TEST nvmf_fips 00:19:35.370 ************************************ 00:19:35.370 04:56:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:35.370 04:56:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:35.370 04:56:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:19:35.370 04:56:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:35.370 ************************************ 00:19:35.370 START TEST nvmf_control_msg_list 00:19:35.370 ************************************ 00:19:35.370 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:35.370 * Looking for test storage... 00:19:35.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:35.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.371 --rc genhtml_branch_coverage=1 00:19:35.371 --rc genhtml_function_coverage=1 00:19:35.371 --rc genhtml_legend=1 00:19:35.371 --rc geninfo_all_blocks=1 00:19:35.371 --rc geninfo_unexecuted_blocks=1 00:19:35.371 00:19:35.371 ' 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:35.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.371 --rc genhtml_branch_coverage=1 00:19:35.371 --rc genhtml_function_coverage=1 00:19:35.371 --rc genhtml_legend=1 00:19:35.371 --rc geninfo_all_blocks=1 00:19:35.371 --rc geninfo_unexecuted_blocks=1 00:19:35.371 00:19:35.371 ' 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:35.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.371 --rc genhtml_branch_coverage=1 00:19:35.371 --rc genhtml_function_coverage=1 00:19:35.371 --rc genhtml_legend=1 00:19:35.371 --rc geninfo_all_blocks=1 00:19:35.371 --rc geninfo_unexecuted_blocks=1 00:19:35.371 00:19:35.371 ' 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:35.371 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.371 --rc genhtml_branch_coverage=1 00:19:35.371 --rc genhtml_function_coverage=1 00:19:35.371 --rc genhtml_legend=1 00:19:35.371 --rc geninfo_all_blocks=1 00:19:35.371 --rc geninfo_unexecuted_blocks=1 00:19:35.371 00:19:35.371 ' 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:35.371 04:56:26 
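The `scripts/common.sh` trace above (`cmp_versions 1.15 '<' 2`, splitting on `IFS=.-:` into `ver1`/`ver2` arrays and comparing field by field) performs a numeric dotted-version comparison. A minimal sketch of that technique follows; the function name `lt_version` is illustrative, not the real helper, and missing fields are assumed to compare as 0.

```shell
# Field-by-field dotted-version compare, as in the cmp_versions trace above:
# split both versions on '.', then walk the numeric fields left to right.
lt_version() {
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i x y n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0} y=${b[i]:-0}       # absent fields count as 0
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1                            # equal is not "less than"
}

lt_version 1.15 2 && echo "1.15 is older than 2"   # prints "1.15 is older than 2"
```

Note that a plain string compare would get this wrong (`"1.15" > "1.2"` lexically), which is why the fields must be compared as integers.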
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.371 04:56:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.371 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.372 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.372 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:35.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:35.372 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:35.372 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:35.372 04:56:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:35.372 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:35.372 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:35.372 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.372 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:35.372 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:35.372 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:35.372 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.372 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.372 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.372 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:35.372 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:35.372 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:35.372 04:56:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.941 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:41.941 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:41.941 04:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:41.941 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:41.941 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:41.941 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:41.941 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:41.941 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:41.941 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:41.941 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:41.941 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:41.941 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:41.942 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:41.942 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:41.942 04:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:41.942 Found net devices under 0000:af:00.0: cvl_0_0 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:41.942 04:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:41.942 Found net devices under 0000:af:00.1: cvl_0_1 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:41.942 04:56:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:41.942 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:41.942 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:41.942 04:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:41.942 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:41.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:41.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:19:41.942 00:19:41.942 --- 10.0.0.2 ping statistics --- 00:19:41.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.942 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:19:41.942 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:41.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:41.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:19:41.942 00:19:41.942 --- 10.0.0.1 ping statistics --- 00:19:41.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.942 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:19:41.942 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.942 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:41.942 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:41.942 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.942 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:41.942 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:41.942 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:41.942 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:41.942 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:41.942 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=659121 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 659121 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 659121 ']' 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
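The `nvmf_tcp_init` sequence traced above builds a one-host test topology: one port of a two-port NIC (`cvl_0_0`) is moved into a network namespace to act as the target, while the other port (`cvl_0_1`) stays in the root namespace as the initiator, with an iptables ACCEPT for NVMe/TCP port 4420 and a ping in each direction. The sketch below mirrors that sequence; it defaults to a dry run that only echoes commands, since the real steps need root and the `cvl_0_*` interfaces specific to this test rig.

```shell
# Dry-run sketch of the namespace topology from the trace above.
# Set DRY_RUN=0 (as root, with real interfaces) to actually execute.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0          # target-side port, moved into the namespace
INI_IF=cvl_0_1          # initiator-side port, stays in the root namespace

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# allow NVMe/TCP (port 4420) in from the initiator interface
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# sanity-check both directions, as the trace's ping output shows
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Isolating the target port in its own namespace forces traffic between the two ports to traverse the physical link rather than the kernel loopback, which is the point of a "phy" transport test.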
00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.943 [2024-12-10 04:56:32.194344] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:19:41.943 [2024-12-10 04:56:32.194386] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.943 [2024-12-10 04:56:32.272709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.943 [2024-12-10 04:56:32.311382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.943 [2024-12-10 04:56:32.311416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.943 [2024-12-10 04:56:32.311423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.943 [2024-12-10 04:56:32.311429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.943 [2024-12-10 04:56:32.311433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:41.943 [2024-12-10 04:56:32.311915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.943 [2024-12-10 04:56:32.450572] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.943 Malloc0 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.943 [2024-12-10 04:56:32.490728] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=659248 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=659249 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=659250 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 659248 00:19:41.943 04:56:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:41.943 [2024-12-10 04:56:32.579376] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:41.943 [2024-12-10 04:56:32.579554] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:41.943 [2024-12-10 04:56:32.579704] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:42.880 Initializing NVMe Controllers 00:19:42.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:42.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:42.880 Initialization complete. Launching workers. 00:19:42.880 ======================================================== 00:19:42.880 Latency(us) 00:19:42.880 Device Information : IOPS MiB/s Average min max 00:19:42.880 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 24.00 0.09 41702.92 40870.94 41938.03 00:19:42.880 ======================================================== 00:19:42.880 Total : 24.00 0.09 41702.92 40870.94 41938.03 00:19:42.880 00:19:42.880 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 659249 00:19:42.880 Initializing NVMe Controllers 00:19:42.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:42.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:42.880 Initialization complete. Launching workers. 
00:19:42.880 ======================================================== 00:19:42.880 Latency(us) 00:19:42.880 Device Information : IOPS MiB/s Average min max 00:19:42.880 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4820.00 18.83 207.10 149.75 406.16 00:19:42.880 ======================================================== 00:19:42.880 Total : 4820.00 18.83 207.10 149.75 406.16 00:19:42.880 00:19:42.880 Initializing NVMe Controllers 00:19:42.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:42.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:42.880 Initialization complete. Launching workers. 00:19:42.880 ======================================================== 00:19:42.880 Latency(us) 00:19:42.880 Device Information : IOPS MiB/s Average min max 00:19:42.880 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 41491.86 40814.95 41933.90 00:19:42.880 ======================================================== 00:19:42.880 Total : 25.00 0.10 41491.86 40814.95 41933.90 00:19:42.880 00:19:42.880 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 659250 00:19:42.880 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:42.880 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:42.880 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:42.880 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:42.880 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:42.880 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:42.880 04:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:42.880 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:42.880 rmmod nvme_tcp 00:19:42.880 rmmod nvme_fabrics 00:19:42.880 rmmod nvme_keyring 00:19:42.880 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:42.880 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:42.881 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:42.881 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 659121 ']' 00:19:42.881 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 659121 00:19:42.881 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 659121 ']' 00:19:42.881 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 659121 00:19:42.881 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:42.881 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.881 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 659121 00:19:42.881 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.881 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.881 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 659121' 00:19:42.881 killing process with pid 659121 00:19:42.881 04:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 659121 00:19:42.881 04:56:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 659121 00:19:43.140 04:56:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:43.140 04:56:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:43.140 04:56:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:43.140 04:56:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:43.140 04:56:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:43.140 04:56:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:43.140 04:56:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:43.140 04:56:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:43.140 04:56:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:43.140 04:56:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.140 04:56:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.140 04:56:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:45.675 00:19:45.675 real 0m10.168s 00:19:45.675 user 0m6.858s 00:19:45.675 sys 0m5.354s 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:45.675 ************************************ 00:19:45.675 END TEST nvmf_control_msg_list 00:19:45.675 ************************************ 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:45.675 ************************************ 00:19:45.675 START TEST nvmf_wait_for_buf 00:19:45.675 ************************************ 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:45.675 * Looking for test storage... 
00:19:45.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:19:45.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.675 --rc genhtml_branch_coverage=1 00:19:45.675 --rc genhtml_function_coverage=1 00:19:45.675 --rc genhtml_legend=1 00:19:45.675 --rc geninfo_all_blocks=1 00:19:45.675 --rc geninfo_unexecuted_blocks=1 00:19:45.675 00:19:45.675 ' 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:45.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.675 --rc genhtml_branch_coverage=1 00:19:45.675 --rc genhtml_function_coverage=1 00:19:45.675 --rc genhtml_legend=1 00:19:45.675 --rc geninfo_all_blocks=1 00:19:45.675 --rc geninfo_unexecuted_blocks=1 00:19:45.675 00:19:45.675 ' 00:19:45.675 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:45.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.675 --rc genhtml_branch_coverage=1 00:19:45.675 --rc genhtml_function_coverage=1 00:19:45.675 --rc genhtml_legend=1 00:19:45.676 --rc geninfo_all_blocks=1 00:19:45.676 --rc geninfo_unexecuted_blocks=1 00:19:45.676 00:19:45.676 ' 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:45.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.676 --rc genhtml_branch_coverage=1 00:19:45.676 --rc genhtml_function_coverage=1 00:19:45.676 --rc genhtml_legend=1 00:19:45.676 --rc geninfo_all_blocks=1 00:19:45.676 --rc geninfo_unexecuted_blocks=1 00:19:45.676 00:19:45.676 ' 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:45.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:45.676 04:56:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:52.246 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:52.246 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:52.246 Found net devices under 0000:af:00.0: cvl_0_0 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.246 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.247 04:56:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:52.247 Found net devices under 0000:af:00.1: cvl_0_1 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:52.247 04:56:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:52.247 04:56:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:52.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:19:52.247 00:19:52.247 --- 10.0.0.2 ping statistics --- 00:19:52.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.247 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:52.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:52.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:19:52.247 00:19:52.247 --- 10.0.0.1 ping statistics --- 00:19:52.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.247 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=662963 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 662963 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 662963 ']' 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.247 [2024-12-10 04:56:42.451264] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:19:52.247 [2024-12-10 04:56:42.451308] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.247 [2024-12-10 04:56:42.527222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.247 [2024-12-10 04:56:42.566692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.247 [2024-12-10 04:56:42.566725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:52.247 [2024-12-10 04:56:42.566732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.247 [2024-12-10 04:56:42.566738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.247 [2024-12-10 04:56:42.566743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.247 [2024-12-10 04:56:42.567241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.247 
04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.247 Malloc0 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.247 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:52.248 [2024-12-10 04:56:42.745015] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.248 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.248 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:52.248 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.248 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.248 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.248 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:52.248 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.248 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.248 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.248 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:52.248 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.248 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.248 [2024-12-10 04:56:42.773222] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.248 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:52.248 04:56:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:52.248 [2024-12-10 04:56:42.860257] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:53.625 Initializing NVMe Controllers 00:19:53.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:53.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:53.625 Initialization complete. Launching workers. 00:19:53.625 ======================================================== 00:19:53.625 Latency(us) 00:19:53.625 Device Information : IOPS MiB/s Average min max 00:19:53.626 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32238.82 7253.01 63850.05 00:19:53.626 ======================================================== 00:19:53.626 Total : 129.00 16.12 32238.82 7253.01 63850.05 00:19:53.626 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.626 04:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:53.626 rmmod nvme_tcp 00:19:53.626 rmmod nvme_fabrics 00:19:53.626 rmmod nvme_keyring 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 662963 ']' 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 662963 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 662963 ']' 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 662963 
00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 662963 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 662963' 00:19:53.626 killing process with pid 662963 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 662963 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 662963 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:53.626 04:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.626 04:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.160 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:56.160 00:19:56.160 real 0m10.480s 00:19:56.160 user 0m4.061s 00:19:56.160 sys 0m4.866s 00:19:56.160 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:56.160 04:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:56.160 ************************************ 00:19:56.160 END TEST nvmf_wait_for_buf 00:19:56.160 ************************************ 00:19:56.161 04:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:56.161 04:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:56.161 04:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:56.161 04:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:56.161 04:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:56.161 04:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:01.436 
04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.436 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:01.437 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:01.437 04:56:52 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:01.437 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:01.437 Found net devices under 0000:af:00.0: cvl_0_0 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:01.437 Found net devices under 0000:af:00.1: cvl_0_1 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:01.437 ************************************ 00:20:01.437 START TEST nvmf_perf_adq 00:20:01.437 ************************************ 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:01.437 * Looking for test storage... 00:20:01.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:20:01.437 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:01.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.697 --rc genhtml_branch_coverage=1 00:20:01.697 --rc genhtml_function_coverage=1 00:20:01.697 --rc genhtml_legend=1 00:20:01.697 --rc geninfo_all_blocks=1 00:20:01.697 --rc geninfo_unexecuted_blocks=1 00:20:01.697 00:20:01.697 ' 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:01.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.697 --rc genhtml_branch_coverage=1 00:20:01.697 --rc genhtml_function_coverage=1 00:20:01.697 --rc genhtml_legend=1 00:20:01.697 --rc geninfo_all_blocks=1 00:20:01.697 --rc geninfo_unexecuted_blocks=1 00:20:01.697 00:20:01.697 ' 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:01.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.697 --rc genhtml_branch_coverage=1 00:20:01.697 --rc genhtml_function_coverage=1 00:20:01.697 --rc genhtml_legend=1 00:20:01.697 --rc geninfo_all_blocks=1 00:20:01.697 --rc geninfo_unexecuted_blocks=1 00:20:01.697 00:20:01.697 ' 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:01.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.697 --rc genhtml_branch_coverage=1 00:20:01.697 --rc genhtml_function_coverage=1 00:20:01.697 --rc genhtml_legend=1 00:20:01.697 --rc geninfo_all_blocks=1 00:20:01.697 --rc geninfo_unexecuted_blocks=1 00:20:01.697 00:20:01.697 ' 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:01.697 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.698 04:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:01.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:01.698 04:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:08.269 04:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:08.269 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:08.269 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:08.269 Found net devices under 0000:af:00.0: cvl_0_0 00:20:08.269 04:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.269 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:08.270 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.270 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:08.270 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.270 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:08.270 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:08.270 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.270 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:08.270 Found net devices under 0000:af:00.1: cvl_0_1 00:20:08.270 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.270 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:08.270 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.270 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:08.270 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:08.270 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:08.270 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:20:08.270 04:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:08.529 04:56:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:11.066 04:57:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:16.343 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:16.343 04:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:16.343 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:16.343 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:16.344 Found net devices under 0000:af:00.0: cvl_0_0 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:16.344 Found net devices under 0000:af:00.1: cvl_0_1 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:16.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:16.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:20:16.344 00:20:16.344 --- 10.0.0.2 ping statistics --- 00:20:16.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.344 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:16.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:16.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:20:16.344 00:20:16.344 --- 10.0.0.1 ping statistics --- 00:20:16.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.344 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
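After moving cvl_0_0 into the cvl_0_0_ns_spdk namespace and assigning 10.0.0.1/10.0.0.2, the script verifies connectivity in both directions with single-packet pings, as shown above. A hedged sketch of checking the loss figure from ping's summary line; `parse_loss` is an illustrative helper, not part of the test scripts:

```shell
#!/usr/bin/env bash
# Extract the packet-loss percentage from ping's statistics line, e.g.
# "1 packets transmitted, 1 received, 0% packet loss, time 0ms".
# parse_loss is a hypothetical helper for illustration only.
parse_loss() {
    awk -F', ' '/packet loss/ {sub(/%.*/, "", $3); print $3}'
}
summary="1 packets transmitted, 1 received, 0% packet loss, time 0ms"
loss=$(echo "$summary" | parse_loss)
echo "$loss"   # -> 0
```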
start_nvmf_tgt 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=671802 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 671802 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 671802 ']' 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.344 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.344 [2024-12-10 04:57:07.470592] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:20:16.344 [2024-12-10 04:57:07.470635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.604 [2024-12-10 04:57:07.550738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:16.604 [2024-12-10 04:57:07.592600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.604 [2024-12-10 04:57:07.592638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.604 [2024-12-10 04:57:07.592645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.604 [2024-12-10 04:57:07.592651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.604 [2024-12-10 04:57:07.592656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:16.604 [2024-12-10 04:57:07.594089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.604 [2024-12-10 04:57:07.594217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.604 [2024-12-10 04:57:07.594107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.604 [2024-12-10 04:57:07.594217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:16.604 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.604 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:16.604 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:16.604 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:16.604 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.604 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.604 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:16.604 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:16.604 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:16.604 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.604 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.604 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.604 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:16.604 04:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:16.604 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.604 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.864 [2024-12-10 04:57:07.823151] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.864 Malloc1 00:20:16.864 04:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.864 [2024-12-10 04:57:07.881202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=672083 00:20:16.864 04:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:16.864 04:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:18.776 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:18.776 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.776 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.040 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.040 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:19.040 "tick_rate": 2100000000, 00:20:19.040 "poll_groups": [ 00:20:19.040 { 00:20:19.040 "name": "nvmf_tgt_poll_group_000", 00:20:19.040 "admin_qpairs": 1, 00:20:19.040 "io_qpairs": 1, 00:20:19.040 "current_admin_qpairs": 1, 00:20:19.040 "current_io_qpairs": 1, 00:20:19.040 "pending_bdev_io": 0, 00:20:19.040 "completed_nvme_io": 20480, 00:20:19.040 "transports": [ 00:20:19.040 { 00:20:19.040 "trtype": "TCP" 00:20:19.040 } 00:20:19.040 ] 00:20:19.040 }, 00:20:19.040 { 00:20:19.040 "name": "nvmf_tgt_poll_group_001", 00:20:19.040 "admin_qpairs": 0, 00:20:19.040 "io_qpairs": 1, 00:20:19.040 "current_admin_qpairs": 0, 00:20:19.040 "current_io_qpairs": 1, 00:20:19.040 "pending_bdev_io": 0, 00:20:19.040 "completed_nvme_io": 20296, 00:20:19.040 "transports": [ 00:20:19.040 { 00:20:19.040 "trtype": "TCP" 00:20:19.040 } 00:20:19.040 ] 00:20:19.040 }, 00:20:19.040 { 00:20:19.040 "name": "nvmf_tgt_poll_group_002", 00:20:19.040 "admin_qpairs": 0, 00:20:19.040 "io_qpairs": 1, 00:20:19.040 "current_admin_qpairs": 0, 00:20:19.040 "current_io_qpairs": 1, 00:20:19.040 "pending_bdev_io": 0, 00:20:19.040 "completed_nvme_io": 20811, 00:20:19.040 
"transports": [ 00:20:19.040 { 00:20:19.040 "trtype": "TCP" 00:20:19.040 } 00:20:19.040 ] 00:20:19.040 }, 00:20:19.040 { 00:20:19.040 "name": "nvmf_tgt_poll_group_003", 00:20:19.040 "admin_qpairs": 0, 00:20:19.040 "io_qpairs": 1, 00:20:19.040 "current_admin_qpairs": 0, 00:20:19.040 "current_io_qpairs": 1, 00:20:19.040 "pending_bdev_io": 0, 00:20:19.040 "completed_nvme_io": 20084, 00:20:19.040 "transports": [ 00:20:19.040 { 00:20:19.040 "trtype": "TCP" 00:20:19.040 } 00:20:19.040 ] 00:20:19.040 } 00:20:19.040 ] 00:20:19.040 }' 00:20:19.040 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:19.040 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:19.040 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:19.040 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:19.040 04:57:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 672083 00:20:27.194 Initializing NVMe Controllers 00:20:27.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:27.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:27.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:27.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:27.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:27.194 Initialization complete. Launching workers. 
00:20:27.194 ======================================================== 00:20:27.194 Latency(us) 00:20:27.194 Device Information : IOPS MiB/s Average min max 00:20:27.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10983.70 42.91 5826.08 1888.64 10331.11 00:20:27.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10840.80 42.35 5903.07 2447.42 9998.90 00:20:27.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10881.40 42.51 5881.28 2345.44 10112.43 00:20:27.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10732.30 41.92 5963.76 2346.27 9438.74 00:20:27.194 ======================================================== 00:20:27.194 Total : 43438.20 169.68 5893.14 1888.64 10331.11 00:20:27.194 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:27.194 rmmod nvme_tcp 00:20:27.194 rmmod nvme_fabrics 00:20:27.194 rmmod nvme_keyring 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:27.194 04:57:18 
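The `count=4` / `[[ 4 -ne 4 ]]` check earlier in the trace verifies that ADQ steered exactly one IO qpair onto each of the four poll groups; perf_adq.sh does this by piping `nvmf_get_stats` through `jq ... | wc -l`. A rough grep-based equivalent on a cut-down sample of that JSON (field subset taken from the stats output above); this is a sketch of the check, not the script's actual jq filter:

```shell
#!/usr/bin/env bash
# Count poll groups that currently own exactly one IO qpair, mimicking the
# jq | wc -l check in perf_adq.sh against a trimmed nvmf_get_stats payload.
stats='{
  "poll_groups": [
    { "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1 }
  ]
}'
count=$(echo "$stats" | grep -c '"current_io_qpairs": 1')
echo "$count"   # -> 4
[ "$count" -eq 4 ] && echo "one IO qpair per poll group"
```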
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 671802 ']' 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 671802 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 671802 ']' 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 671802 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 671802 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 671802' 00:20:27.194 killing process with pid 671802 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 671802 00:20:27.194 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 671802 00:20:27.548 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:27.548 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:27.548 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:27.548 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:27.548 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:27.548 04:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:27.548 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:27.548 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:27.548 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:27.548 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.548 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.548 04:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.455 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:29.455 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:29.455 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:29.455 04:57:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:30.834 04:57:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:33.372 04:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
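The `ipts`/`iptr` pair above tags every firewall rule it adds with an `SPDK_NVMF` comment so that teardown can strip only its own rules via `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A sketch of that filtering step on canned iptables-save text (no root needed; the sample rules are made up for illustration):

```shell
#!/usr/bin/env bash
# Demonstrate the SPDK_NVMF-comment cleanup filter on a fake ruleset:
# only rules carrying the tag are dropped, everything else survives.
saved='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:rule
-A INPUT -p tcp --dport 22 -j ACCEPT'
kept=$(echo "$saved" | grep -v SPDK_NVMF)
echo "$kept"
```

In the real run the surviving rules are fed back through `iptables-restore`, which atomically replaces the ruleset, so nothing outside the tagged rules changes.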
nvmf/common.sh@438 -- # local -g is_hw=no 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:38.652 04:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.652 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:38.653 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:38.653 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:38.653 Found net devices under 0000:af:00.0: cvl_0_0 00:20:38.653 04:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:38.653 Found net devices under 0000:af:00.1: cvl_0_1 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:38.653 04:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:38.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.811 ms 00:20:38.653 00:20:38.653 --- 10.0.0.2 ping statistics --- 00:20:38.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.653 rtt min/avg/max/mdev = 0.811/0.811/0.811/0.000 ms 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:38.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:20:38.653 00:20:38.653 --- 10.0.0.1 ping statistics --- 00:20:38.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.653 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:38.653 net.core.busy_poll = 1 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:38.653 net.core.busy_read = 1 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:38.653 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:38.654 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:38.654 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:38.654 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:38.654 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:38.654 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:38.654 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.654 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.654 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=675921 00:20:38.654 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 675921 00:20:38.654 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:20:38.654 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 675921 ']' 00:20:38.654 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.654 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.654 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.654 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.654 04:57:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.654 [2024-12-10 04:57:29.558699] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:20:38.654 [2024-12-10 04:57:29.558753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.654 [2024-12-10 04:57:29.636418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:38.654 [2024-12-10 04:57:29.675595] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.654 [2024-12-10 04:57:29.675635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.654 [2024-12-10 04:57:29.675641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.654 [2024-12-10 04:57:29.675647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:38.654 [2024-12-10 04:57:29.675652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.654 [2024-12-10 04:57:29.676925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.654 [2024-12-10 04:57:29.677036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.654 [2024-12-10 04:57:29.677119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.654 [2024-12-10 04:57:29.677120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.592 [2024-12-10 04:57:30.576141] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.592 04:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.592 Malloc1 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.592 [2024-12-10 04:57:30.637907] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=676168 
00:20:39.592 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:39.593 04:57:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:42.129 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:42.129 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.129 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:42.129 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.129 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:42.129 "tick_rate": 2100000000, 00:20:42.129 "poll_groups": [ 00:20:42.129 { 00:20:42.129 "name": "nvmf_tgt_poll_group_000", 00:20:42.129 "admin_qpairs": 1, 00:20:42.129 "io_qpairs": 2, 00:20:42.129 "current_admin_qpairs": 1, 00:20:42.130 "current_io_qpairs": 2, 00:20:42.130 "pending_bdev_io": 0, 00:20:42.130 "completed_nvme_io": 27972, 00:20:42.130 "transports": [ 00:20:42.130 { 00:20:42.130 "trtype": "TCP" 00:20:42.130 } 00:20:42.130 ] 00:20:42.130 }, 00:20:42.130 { 00:20:42.130 "name": "nvmf_tgt_poll_group_001", 00:20:42.130 "admin_qpairs": 0, 00:20:42.130 "io_qpairs": 2, 00:20:42.130 "current_admin_qpairs": 0, 00:20:42.130 "current_io_qpairs": 2, 00:20:42.130 "pending_bdev_io": 0, 00:20:42.130 "completed_nvme_io": 29230, 00:20:42.130 "transports": [ 00:20:42.130 { 00:20:42.130 "trtype": "TCP" 00:20:42.130 } 00:20:42.130 ] 00:20:42.130 }, 00:20:42.130 { 00:20:42.130 "name": "nvmf_tgt_poll_group_002", 00:20:42.130 "admin_qpairs": 0, 00:20:42.130 "io_qpairs": 0, 00:20:42.130 "current_admin_qpairs": 0, 
00:20:42.130 "current_io_qpairs": 0, 00:20:42.130 "pending_bdev_io": 0, 00:20:42.130 "completed_nvme_io": 0, 00:20:42.130 "transports": [ 00:20:42.130 { 00:20:42.130 "trtype": "TCP" 00:20:42.130 } 00:20:42.130 ] 00:20:42.130 }, 00:20:42.130 { 00:20:42.130 "name": "nvmf_tgt_poll_group_003", 00:20:42.130 "admin_qpairs": 0, 00:20:42.130 "io_qpairs": 0, 00:20:42.130 "current_admin_qpairs": 0, 00:20:42.130 "current_io_qpairs": 0, 00:20:42.130 "pending_bdev_io": 0, 00:20:42.130 "completed_nvme_io": 0, 00:20:42.130 "transports": [ 00:20:42.130 { 00:20:42.130 "trtype": "TCP" 00:20:42.130 } 00:20:42.130 ] 00:20:42.130 } 00:20:42.130 ] 00:20:42.130 }' 00:20:42.130 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:42.130 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:42.130 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:42.130 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:42.130 04:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 676168 00:20:50.254 Initializing NVMe Controllers 00:20:50.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:50.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:50.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:50.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:50.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:50.254 Initialization complete. Launching workers. 
00:20:50.254 ======================================================== 00:20:50.255 Latency(us) 00:20:50.255 Device Information : IOPS MiB/s Average min max 00:20:50.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7810.59 30.51 8195.84 1413.81 52292.40 00:20:50.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7404.09 28.92 8646.14 1434.16 52505.35 00:20:50.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7521.49 29.38 8508.64 1557.30 52633.30 00:20:50.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7497.29 29.29 8573.11 1175.09 51839.95 00:20:50.255 ======================================================== 00:20:50.255 Total : 30233.46 118.10 8477.49 1175.09 52633.30 00:20:50.255 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:50.255 rmmod nvme_tcp 00:20:50.255 rmmod nvme_fabrics 00:20:50.255 rmmod nvme_keyring 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:50.255 04:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 675921 ']' 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 675921 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 675921 ']' 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 675921 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 675921 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 675921' 00:20:50.255 killing process with pid 675921 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 675921 00:20:50.255 04:57:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 675921 00:20:50.255 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:50.255 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:50.255 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:50.255 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:50.255 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:50.255 04:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:50.255 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:50.255 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:50.255 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:50.255 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.255 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.255 04:57:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:53.548 00:20:53.548 real 0m51.793s 00:20:53.548 user 2m47.155s 00:20:53.548 sys 0m10.209s 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:53.548 ************************************ 00:20:53.548 END TEST nvmf_perf_adq 00:20:53.548 ************************************ 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:53.548 ************************************ 00:20:53.548 START TEST nvmf_shutdown 00:20:53.548 ************************************ 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:53.548 * Looking for test storage... 00:20:53.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:53.548 04:57:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:53.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.548 --rc genhtml_branch_coverage=1 00:20:53.548 --rc genhtml_function_coverage=1 00:20:53.548 --rc genhtml_legend=1 00:20:53.548 --rc geninfo_all_blocks=1 00:20:53.548 --rc geninfo_unexecuted_blocks=1 00:20:53.548 00:20:53.548 ' 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:53.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.548 --rc genhtml_branch_coverage=1 00:20:53.548 --rc genhtml_function_coverage=1 00:20:53.548 --rc genhtml_legend=1 00:20:53.548 --rc geninfo_all_blocks=1 00:20:53.548 --rc geninfo_unexecuted_blocks=1 00:20:53.548 00:20:53.548 ' 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:53.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.548 --rc genhtml_branch_coverage=1 00:20:53.548 --rc genhtml_function_coverage=1 00:20:53.548 --rc genhtml_legend=1 00:20:53.548 --rc geninfo_all_blocks=1 00:20:53.548 --rc geninfo_unexecuted_blocks=1 00:20:53.548 00:20:53.548 ' 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:53.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.548 --rc genhtml_branch_coverage=1 00:20:53.548 --rc genhtml_function_coverage=1 00:20:53.548 --rc genhtml_legend=1 00:20:53.548 --rc geninfo_all_blocks=1 00:20:53.548 --rc geninfo_unexecuted_blocks=1 00:20:53.548 00:20:53.548 ' 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.548 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:53.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:53.549 ************************************ 00:20:53.549 START TEST nvmf_shutdown_tc1 00:20:53.549 ************************************ 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:53.549 04:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:00.132 04:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.132 04:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:00.132 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.132 04:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:00.132 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:00.132 Found net devices under 0000:af:00.0: cvl_0_0 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:00.132 Found net devices under 0000:af:00.1: cvl_0_1 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:00.132 04:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:00.132 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:00.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:21:00.133 00:21:00.133 --- 10.0.0.2 ping statistics --- 00:21:00.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.133 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:00.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:21:00.133 00:21:00.133 --- 10.0.0.1 ping statistics --- 00:21:00.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.133 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=681516 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 681516 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 681516 ']' 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:00.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.133 [2024-12-10 04:57:50.580782] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:21:00.133 [2024-12-10 04:57:50.580835] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.133 [2024-12-10 04:57:50.660135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.133 [2024-12-10 04:57:50.701857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.133 [2024-12-10 04:57:50.701893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.133 [2024-12-10 04:57:50.701900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.133 [2024-12-10 04:57:50.701906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.133 [2024-12-10 04:57:50.701911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:00.133 [2024-12-10 04:57:50.703371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.133 [2024-12-10 04:57:50.703456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.133 [2024-12-10 04:57:50.703562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.133 [2024-12-10 04:57:50.703563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.133 [2024-12-10 04:57:50.840992] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.133 04:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.133 04:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.133 Malloc1 00:21:00.133 [2024-12-10 04:57:50.954947] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.133 Malloc2 00:21:00.133 Malloc3 00:21:00.133 Malloc4 00:21:00.133 Malloc5 00:21:00.133 Malloc6 00:21:00.133 Malloc7 00:21:00.133 Malloc8 00:21:00.393 Malloc9 
00:21:00.393 Malloc10 00:21:00.393 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.393 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:00.393 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:00.393 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.393 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=681776 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 681776 /var/tmp/bdevperf.sock 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 681776 ']' 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:00.394 { 00:21:00.394 "params": { 00:21:00.394 "name": "Nvme$subsystem", 00:21:00.394 "trtype": "$TEST_TRANSPORT", 00:21:00.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.394 "adrfam": "ipv4", 00:21:00.394 "trsvcid": "$NVMF_PORT", 00:21:00.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.394 "hdgst": ${hdgst:-false}, 00:21:00.394 "ddgst": ${ddgst:-false} 00:21:00.394 }, 00:21:00.394 "method": "bdev_nvme_attach_controller" 00:21:00.394 } 00:21:00.394 EOF 00:21:00.394 )") 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:00.394 { 00:21:00.394 "params": { 00:21:00.394 "name": "Nvme$subsystem", 00:21:00.394 "trtype": "$TEST_TRANSPORT", 00:21:00.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.394 "adrfam": "ipv4", 00:21:00.394 "trsvcid": "$NVMF_PORT", 00:21:00.394 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.394 "hdgst": ${hdgst:-false}, 00:21:00.394 "ddgst": ${ddgst:-false} 00:21:00.394 }, 00:21:00.394 "method": "bdev_nvme_attach_controller" 00:21:00.394 } 00:21:00.394 EOF 00:21:00.394 )") 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:00.394 { 00:21:00.394 "params": { 00:21:00.394 "name": "Nvme$subsystem", 00:21:00.394 "trtype": "$TEST_TRANSPORT", 00:21:00.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.394 "adrfam": "ipv4", 00:21:00.394 "trsvcid": "$NVMF_PORT", 00:21:00.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.394 "hdgst": ${hdgst:-false}, 00:21:00.394 "ddgst": ${ddgst:-false} 00:21:00.394 }, 00:21:00.394 "method": "bdev_nvme_attach_controller" 00:21:00.394 } 00:21:00.394 EOF 00:21:00.394 )") 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:00.394 { 00:21:00.394 "params": { 00:21:00.394 "name": "Nvme$subsystem", 00:21:00.394 "trtype": "$TEST_TRANSPORT", 00:21:00.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.394 "adrfam": "ipv4", 00:21:00.394 "trsvcid": "$NVMF_PORT", 00:21:00.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.394 "hdgst": 
${hdgst:-false}, 00:21:00.394 "ddgst": ${ddgst:-false} 00:21:00.394 }, 00:21:00.394 "method": "bdev_nvme_attach_controller" 00:21:00.394 } 00:21:00.394 EOF 00:21:00.394 )") 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:00.394 { 00:21:00.394 "params": { 00:21:00.394 "name": "Nvme$subsystem", 00:21:00.394 "trtype": "$TEST_TRANSPORT", 00:21:00.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.394 "adrfam": "ipv4", 00:21:00.394 "trsvcid": "$NVMF_PORT", 00:21:00.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.394 "hdgst": ${hdgst:-false}, 00:21:00.394 "ddgst": ${ddgst:-false} 00:21:00.394 }, 00:21:00.394 "method": "bdev_nvme_attach_controller" 00:21:00.394 } 00:21:00.394 EOF 00:21:00.394 )") 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:00.394 { 00:21:00.394 "params": { 00:21:00.394 "name": "Nvme$subsystem", 00:21:00.394 "trtype": "$TEST_TRANSPORT", 00:21:00.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.394 "adrfam": "ipv4", 00:21:00.394 "trsvcid": "$NVMF_PORT", 00:21:00.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.394 "hdgst": ${hdgst:-false}, 00:21:00.394 "ddgst": ${ddgst:-false} 00:21:00.394 }, 00:21:00.394 "method": "bdev_nvme_attach_controller" 
00:21:00.394 } 00:21:00.394 EOF 00:21:00.394 )") 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:00.394 { 00:21:00.394 "params": { 00:21:00.394 "name": "Nvme$subsystem", 00:21:00.394 "trtype": "$TEST_TRANSPORT", 00:21:00.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.394 "adrfam": "ipv4", 00:21:00.394 "trsvcid": "$NVMF_PORT", 00:21:00.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.394 "hdgst": ${hdgst:-false}, 00:21:00.394 "ddgst": ${ddgst:-false} 00:21:00.394 }, 00:21:00.394 "method": "bdev_nvme_attach_controller" 00:21:00.394 } 00:21:00.394 EOF 00:21:00.394 )") 00:21:00.394 [2024-12-10 04:57:51.439432] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:21:00.394 [2024-12-10 04:57:51.439482] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:00.394 { 00:21:00.394 "params": { 00:21:00.394 "name": "Nvme$subsystem", 00:21:00.394 "trtype": "$TEST_TRANSPORT", 00:21:00.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.394 "adrfam": "ipv4", 00:21:00.394 "trsvcid": "$NVMF_PORT", 00:21:00.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.394 "hdgst": ${hdgst:-false}, 00:21:00.394 "ddgst": ${ddgst:-false} 00:21:00.394 }, 00:21:00.394 "method": "bdev_nvme_attach_controller" 00:21:00.394 } 00:21:00.394 EOF 00:21:00.394 )") 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:00.394 { 00:21:00.394 "params": { 00:21:00.394 "name": "Nvme$subsystem", 00:21:00.394 "trtype": "$TEST_TRANSPORT", 00:21:00.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.394 "adrfam": "ipv4", 00:21:00.394 "trsvcid": "$NVMF_PORT", 00:21:00.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.394 "hdgst": ${hdgst:-false}, 
00:21:00.394 "ddgst": ${ddgst:-false} 00:21:00.394 }, 00:21:00.394 "method": "bdev_nvme_attach_controller" 00:21:00.394 } 00:21:00.394 EOF 00:21:00.394 )") 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:00.394 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:00.394 { 00:21:00.394 "params": { 00:21:00.394 "name": "Nvme$subsystem", 00:21:00.394 "trtype": "$TEST_TRANSPORT", 00:21:00.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.394 "adrfam": "ipv4", 00:21:00.395 "trsvcid": "$NVMF_PORT", 00:21:00.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.395 "hdgst": ${hdgst:-false}, 00:21:00.395 "ddgst": ${ddgst:-false} 00:21:00.395 }, 00:21:00.395 "method": "bdev_nvme_attach_controller" 00:21:00.395 } 00:21:00.395 EOF 00:21:00.395 )") 00:21:00.395 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:00.395 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:00.395 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:00.395 04:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:00.395 "params": { 00:21:00.395 "name": "Nvme1", 00:21:00.395 "trtype": "tcp", 00:21:00.395 "traddr": "10.0.0.2", 00:21:00.395 "adrfam": "ipv4", 00:21:00.395 "trsvcid": "4420", 00:21:00.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.395 "hdgst": false, 00:21:00.395 "ddgst": false 00:21:00.395 }, 00:21:00.395 "method": "bdev_nvme_attach_controller" 00:21:00.395 },{ 00:21:00.395 "params": { 00:21:00.395 "name": "Nvme2", 00:21:00.395 "trtype": "tcp", 00:21:00.395 "traddr": "10.0.0.2", 00:21:00.395 "adrfam": "ipv4", 00:21:00.395 "trsvcid": "4420", 00:21:00.395 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:00.395 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:00.395 "hdgst": false, 00:21:00.395 "ddgst": false 00:21:00.395 }, 00:21:00.395 "method": "bdev_nvme_attach_controller" 00:21:00.395 },{ 00:21:00.395 "params": { 00:21:00.395 "name": "Nvme3", 00:21:00.395 "trtype": "tcp", 00:21:00.395 "traddr": "10.0.0.2", 00:21:00.395 "adrfam": "ipv4", 00:21:00.395 "trsvcid": "4420", 00:21:00.395 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:00.395 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:00.395 "hdgst": false, 00:21:00.395 "ddgst": false 00:21:00.395 }, 00:21:00.395 "method": "bdev_nvme_attach_controller" 00:21:00.395 },{ 00:21:00.395 "params": { 00:21:00.395 "name": "Nvme4", 00:21:00.395 "trtype": "tcp", 00:21:00.395 "traddr": "10.0.0.2", 00:21:00.395 "adrfam": "ipv4", 00:21:00.395 "trsvcid": "4420", 00:21:00.395 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:00.395 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:00.395 "hdgst": false, 00:21:00.395 "ddgst": false 00:21:00.395 }, 00:21:00.395 "method": "bdev_nvme_attach_controller" 00:21:00.395 },{ 00:21:00.395 "params": { 
00:21:00.395 "name": "Nvme5", 00:21:00.395 "trtype": "tcp", 00:21:00.395 "traddr": "10.0.0.2", 00:21:00.395 "adrfam": "ipv4", 00:21:00.395 "trsvcid": "4420", 00:21:00.395 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:00.395 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:00.395 "hdgst": false, 00:21:00.395 "ddgst": false 00:21:00.395 }, 00:21:00.395 "method": "bdev_nvme_attach_controller" 00:21:00.395 },{ 00:21:00.395 "params": { 00:21:00.395 "name": "Nvme6", 00:21:00.395 "trtype": "tcp", 00:21:00.395 "traddr": "10.0.0.2", 00:21:00.395 "adrfam": "ipv4", 00:21:00.395 "trsvcid": "4420", 00:21:00.395 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:00.395 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:00.395 "hdgst": false, 00:21:00.395 "ddgst": false 00:21:00.395 }, 00:21:00.395 "method": "bdev_nvme_attach_controller" 00:21:00.395 },{ 00:21:00.395 "params": { 00:21:00.395 "name": "Nvme7", 00:21:00.395 "trtype": "tcp", 00:21:00.395 "traddr": "10.0.0.2", 00:21:00.395 "adrfam": "ipv4", 00:21:00.395 "trsvcid": "4420", 00:21:00.395 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:00.395 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:00.395 "hdgst": false, 00:21:00.395 "ddgst": false 00:21:00.395 }, 00:21:00.395 "method": "bdev_nvme_attach_controller" 00:21:00.395 },{ 00:21:00.395 "params": { 00:21:00.395 "name": "Nvme8", 00:21:00.395 "trtype": "tcp", 00:21:00.395 "traddr": "10.0.0.2", 00:21:00.395 "adrfam": "ipv4", 00:21:00.395 "trsvcid": "4420", 00:21:00.395 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:00.395 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:00.395 "hdgst": false, 00:21:00.395 "ddgst": false 00:21:00.395 }, 00:21:00.395 "method": "bdev_nvme_attach_controller" 00:21:00.395 },{ 00:21:00.395 "params": { 00:21:00.395 "name": "Nvme9", 00:21:00.395 "trtype": "tcp", 00:21:00.395 "traddr": "10.0.0.2", 00:21:00.395 "adrfam": "ipv4", 00:21:00.395 "trsvcid": "4420", 00:21:00.395 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:00.395 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:00.395 "hdgst": false, 00:21:00.395 "ddgst": false 00:21:00.395 }, 00:21:00.395 "method": "bdev_nvme_attach_controller" 00:21:00.395 },{ 00:21:00.395 "params": { 00:21:00.395 "name": "Nvme10", 00:21:00.395 "trtype": "tcp", 00:21:00.395 "traddr": "10.0.0.2", 00:21:00.395 "adrfam": "ipv4", 00:21:00.395 "trsvcid": "4420", 00:21:00.395 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:00.395 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:00.395 "hdgst": false, 00:21:00.395 "ddgst": false 00:21:00.395 }, 00:21:00.395 "method": "bdev_nvme_attach_controller" 00:21:00.395 }' 00:21:00.395 [2024-12-10 04:57:51.514678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.654 [2024-12-10 04:57:51.555175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.559 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.559 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:02.559 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:02.559 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.559 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:02.559 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.559 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 681776 00:21:02.559 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:02.559 04:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:03.496 
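The trace above shows `gen_nvmf_target_json` (from `test/nvmf/common.sh`) emitting one heredoc JSON fragment per subsystem, appending each to the `config` array, and comma-joining them for the app's `--json` input. The following is a hedged, simplified sketch of that pattern, not the actual SPDK helper: the real function expands `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT`, while fixed `tcp`/`10.0.0.2`/`4420` values stand in here.

```shell
#!/usr/bin/env bash
# Simplified sketch of the config-building pattern traced in the log.
gen_target_json() {
    local subsystem
    local config=()
    # "${@:-1}" defaults to subsystem 1 when no arguments are given,
    # mirroring the loop at nvmf/common.sh@562 in the trace.
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }
EOF
)")
    done
    # Comma-join the fragments, as the IFS=, / printf step in the trace does.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_target_json 1 2
```

Each positional argument becomes one `bdev_nvme_attach_controller` entry, which is why the trace repeats the same heredoc block ten times for subsystems 1 through 10.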
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 681776 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:03.496 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 681516 00:21:03.496 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:03.496 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:03.496 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:03.496 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:03.496 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.496 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.496 { 00:21:03.496 "params": { 00:21:03.496 "name": "Nvme$subsystem", 00:21:03.496 "trtype": "$TEST_TRANSPORT", 00:21:03.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.496 "adrfam": "ipv4", 00:21:03.496 "trsvcid": "$NVMF_PORT", 00:21:03.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.496 "hdgst": ${hdgst:-false}, 00:21:03.496 "ddgst": ${ddgst:-false} 00:21:03.496 }, 00:21:03.496 "method": "bdev_nvme_attach_controller" 00:21:03.496 } 00:21:03.496 EOF 00:21:03.496 )") 00:21:03.496 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:03.496 04:57:54 
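The `Killed` line above is the core of `nvmf_shutdown_tc1`: `shutdown.sh@84` hard-kills the bdev_svc app (`kill -9 681776`) and `shutdown.sh@89` then probes the nvmf target with `kill -0 681516` to assert it survived. Below is a minimal sketch of that liveness-check pattern under stated assumptions: `sleep` processes are hypothetical stand-ins for the real target and perf daemons.

```shell
#!/usr/bin/env bash
sleep 30 & target_pid=$!   # stand-in for the nvmf target process
sleep 30 & perf_pid=$!     # stand-in for the bdev_svc/bdevperf app

kill -9 "$perf_pid"            # as shutdown.sh@84: SIGKILL the I/O app
wait "$perf_pid" 2>/dev/null   # reap it; status 128+9=137 reflects SIGKILL
perf_status=$?

# As shutdown.sh@89: kill -0 sends no signal, it only checks the pid exists.
if kill -0 "$target_pid" 2>/dev/null; then target_alive=1; else target_alive=0; fi

kill "$target_pid" 2>/dev/null   # clean up the stand-in target
wait 2>/dev/null
echo "perf_status=$perf_status target_alive=$target_alive"
```

`kill -0` succeeding after the hard kill of the client app is exactly the condition the test uses to conclude the target handled an abrupt initiator death without crashing.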
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.496 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.496 { 00:21:03.496 "params": { 00:21:03.496 "name": "Nvme$subsystem", 00:21:03.496 "trtype": "$TEST_TRANSPORT", 00:21:03.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.496 "adrfam": "ipv4", 00:21:03.496 "trsvcid": "$NVMF_PORT", 00:21:03.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.496 "hdgst": ${hdgst:-false}, 00:21:03.496 "ddgst": ${ddgst:-false} 00:21:03.496 }, 00:21:03.496 "method": "bdev_nvme_attach_controller" 00:21:03.496 } 00:21:03.496 EOF 00:21:03.496 )") 00:21:03.496 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:03.496 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.496 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.496 { 00:21:03.496 "params": { 00:21:03.496 "name": "Nvme$subsystem", 00:21:03.496 "trtype": "$TEST_TRANSPORT", 00:21:03.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.496 "adrfam": "ipv4", 00:21:03.496 "trsvcid": "$NVMF_PORT", 00:21:03.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.496 "hdgst": ${hdgst:-false}, 00:21:03.496 "ddgst": ${ddgst:-false} 00:21:03.496 }, 00:21:03.496 "method": "bdev_nvme_attach_controller" 00:21:03.496 } 00:21:03.496 EOF 00:21:03.496 )") 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.497 
04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.497 { 00:21:03.497 "params": { 00:21:03.497 "name": "Nvme$subsystem", 00:21:03.497 "trtype": "$TEST_TRANSPORT", 00:21:03.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.497 "adrfam": "ipv4", 00:21:03.497 "trsvcid": "$NVMF_PORT", 00:21:03.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.497 "hdgst": ${hdgst:-false}, 00:21:03.497 "ddgst": ${ddgst:-false} 00:21:03.497 }, 00:21:03.497 "method": "bdev_nvme_attach_controller" 00:21:03.497 } 00:21:03.497 EOF 00:21:03.497 )") 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.497 { 00:21:03.497 "params": { 00:21:03.497 "name": "Nvme$subsystem", 00:21:03.497 "trtype": "$TEST_TRANSPORT", 00:21:03.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.497 "adrfam": "ipv4", 00:21:03.497 "trsvcid": "$NVMF_PORT", 00:21:03.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.497 "hdgst": ${hdgst:-false}, 00:21:03.497 "ddgst": ${ddgst:-false} 00:21:03.497 }, 00:21:03.497 "method": "bdev_nvme_attach_controller" 00:21:03.497 } 00:21:03.497 EOF 00:21:03.497 )") 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:21:03.497 { 00:21:03.497 "params": { 00:21:03.497 "name": "Nvme$subsystem", 00:21:03.497 "trtype": "$TEST_TRANSPORT", 00:21:03.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.497 "adrfam": "ipv4", 00:21:03.497 "trsvcid": "$NVMF_PORT", 00:21:03.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.497 "hdgst": ${hdgst:-false}, 00:21:03.497 "ddgst": ${ddgst:-false} 00:21:03.497 }, 00:21:03.497 "method": "bdev_nvme_attach_controller" 00:21:03.497 } 00:21:03.497 EOF 00:21:03.497 )") 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.497 { 00:21:03.497 "params": { 00:21:03.497 "name": "Nvme$subsystem", 00:21:03.497 "trtype": "$TEST_TRANSPORT", 00:21:03.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.497 "adrfam": "ipv4", 00:21:03.497 "trsvcid": "$NVMF_PORT", 00:21:03.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.497 "hdgst": ${hdgst:-false}, 00:21:03.497 "ddgst": ${ddgst:-false} 00:21:03.497 }, 00:21:03.497 "method": "bdev_nvme_attach_controller" 00:21:03.497 } 00:21:03.497 EOF 00:21:03.497 )") 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:03.497 [2024-12-10 04:57:54.368227] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:21:03.497 [2024-12-10 04:57:54.368276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid682258 ] 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.497 { 00:21:03.497 "params": { 00:21:03.497 "name": "Nvme$subsystem", 00:21:03.497 "trtype": "$TEST_TRANSPORT", 00:21:03.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.497 "adrfam": "ipv4", 00:21:03.497 "trsvcid": "$NVMF_PORT", 00:21:03.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.497 "hdgst": ${hdgst:-false}, 00:21:03.497 "ddgst": ${ddgst:-false} 00:21:03.497 }, 00:21:03.497 "method": "bdev_nvme_attach_controller" 00:21:03.497 } 00:21:03.497 EOF 00:21:03.497 )") 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.497 { 00:21:03.497 "params": { 00:21:03.497 "name": "Nvme$subsystem", 00:21:03.497 "trtype": "$TEST_TRANSPORT", 00:21:03.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.497 "adrfam": "ipv4", 00:21:03.497 "trsvcid": "$NVMF_PORT", 00:21:03.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.497 "hdgst": ${hdgst:-false}, 00:21:03.497 "ddgst": ${ddgst:-false} 00:21:03.497 }, 00:21:03.497 "method": 
"bdev_nvme_attach_controller" 00:21:03.497 } 00:21:03.497 EOF 00:21:03.497 )") 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.497 { 00:21:03.497 "params": { 00:21:03.497 "name": "Nvme$subsystem", 00:21:03.497 "trtype": "$TEST_TRANSPORT", 00:21:03.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.497 "adrfam": "ipv4", 00:21:03.497 "trsvcid": "$NVMF_PORT", 00:21:03.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.497 "hdgst": ${hdgst:-false}, 00:21:03.497 "ddgst": ${ddgst:-false} 00:21:03.497 }, 00:21:03.497 "method": "bdev_nvme_attach_controller" 00:21:03.497 } 00:21:03.497 EOF 00:21:03.497 )") 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:03.497 04:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:03.497 "params": { 00:21:03.497 "name": "Nvme1", 00:21:03.497 "trtype": "tcp", 00:21:03.497 "traddr": "10.0.0.2", 00:21:03.497 "adrfam": "ipv4", 00:21:03.497 "trsvcid": "4420", 00:21:03.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.497 "hdgst": false, 00:21:03.497 "ddgst": false 00:21:03.497 }, 00:21:03.497 "method": "bdev_nvme_attach_controller" 00:21:03.497 },{ 00:21:03.497 "params": { 00:21:03.497 "name": "Nvme2", 00:21:03.497 "trtype": "tcp", 00:21:03.497 "traddr": "10.0.0.2", 00:21:03.497 "adrfam": "ipv4", 00:21:03.497 "trsvcid": "4420", 00:21:03.497 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:03.497 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:03.497 "hdgst": false, 00:21:03.497 "ddgst": false 00:21:03.497 }, 00:21:03.497 "method": "bdev_nvme_attach_controller" 00:21:03.497 },{ 00:21:03.497 "params": { 00:21:03.497 "name": "Nvme3", 00:21:03.497 "trtype": "tcp", 00:21:03.497 "traddr": "10.0.0.2", 00:21:03.497 "adrfam": "ipv4", 00:21:03.497 "trsvcid": "4420", 00:21:03.497 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:03.497 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:03.497 "hdgst": false, 00:21:03.497 "ddgst": false 00:21:03.497 }, 00:21:03.497 "method": "bdev_nvme_attach_controller" 00:21:03.497 },{ 00:21:03.497 "params": { 00:21:03.497 "name": "Nvme4", 00:21:03.497 "trtype": "tcp", 00:21:03.497 "traddr": "10.0.0.2", 00:21:03.497 "adrfam": "ipv4", 00:21:03.497 "trsvcid": "4420", 00:21:03.497 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:03.497 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:03.497 "hdgst": false, 00:21:03.497 "ddgst": false 00:21:03.497 }, 00:21:03.497 "method": "bdev_nvme_attach_controller" 00:21:03.497 },{ 00:21:03.497 "params": { 
00:21:03.497 "name": "Nvme5", 00:21:03.497 "trtype": "tcp", 00:21:03.497 "traddr": "10.0.0.2", 00:21:03.497 "adrfam": "ipv4", 00:21:03.497 "trsvcid": "4420", 00:21:03.497 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:03.497 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:03.497 "hdgst": false, 00:21:03.497 "ddgst": false 00:21:03.497 }, 00:21:03.497 "method": "bdev_nvme_attach_controller" 00:21:03.497 },{ 00:21:03.497 "params": { 00:21:03.497 "name": "Nvme6", 00:21:03.497 "trtype": "tcp", 00:21:03.497 "traddr": "10.0.0.2", 00:21:03.497 "adrfam": "ipv4", 00:21:03.497 "trsvcid": "4420", 00:21:03.497 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:03.497 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:03.497 "hdgst": false, 00:21:03.497 "ddgst": false 00:21:03.497 }, 00:21:03.497 "method": "bdev_nvme_attach_controller" 00:21:03.497 },{ 00:21:03.497 "params": { 00:21:03.497 "name": "Nvme7", 00:21:03.497 "trtype": "tcp", 00:21:03.497 "traddr": "10.0.0.2", 00:21:03.497 "adrfam": "ipv4", 00:21:03.497 "trsvcid": "4420", 00:21:03.497 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:03.497 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:03.498 "hdgst": false, 00:21:03.498 "ddgst": false 00:21:03.498 }, 00:21:03.498 "method": "bdev_nvme_attach_controller" 00:21:03.498 },{ 00:21:03.498 "params": { 00:21:03.498 "name": "Nvme8", 00:21:03.498 "trtype": "tcp", 00:21:03.498 "traddr": "10.0.0.2", 00:21:03.498 "adrfam": "ipv4", 00:21:03.498 "trsvcid": "4420", 00:21:03.498 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:03.498 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:03.498 "hdgst": false, 00:21:03.498 "ddgst": false 00:21:03.498 }, 00:21:03.498 "method": "bdev_nvme_attach_controller" 00:21:03.498 },{ 00:21:03.498 "params": { 00:21:03.498 "name": "Nvme9", 00:21:03.498 "trtype": "tcp", 00:21:03.498 "traddr": "10.0.0.2", 00:21:03.498 "adrfam": "ipv4", 00:21:03.498 "trsvcid": "4420", 00:21:03.498 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:03.498 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:03.498 "hdgst": false, 00:21:03.498 "ddgst": false 00:21:03.498 }, 00:21:03.498 "method": "bdev_nvme_attach_controller" 00:21:03.498 },{ 00:21:03.498 "params": { 00:21:03.498 "name": "Nvme10", 00:21:03.498 "trtype": "tcp", 00:21:03.498 "traddr": "10.0.0.2", 00:21:03.498 "adrfam": "ipv4", 00:21:03.498 "trsvcid": "4420", 00:21:03.498 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:03.498 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:03.498 "hdgst": false, 00:21:03.498 "ddgst": false 00:21:03.498 }, 00:21:03.498 "method": "bdev_nvme_attach_controller" 00:21:03.498 }' 00:21:03.498 [2024-12-10 04:57:54.447466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.498 [2024-12-10 04:57:54.487189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.413 Running I/O for 1 seconds... 00:21:06.240 2240.00 IOPS, 140.00 MiB/s 00:21:06.240 Latency(us) 00:21:06.240 [2024-12-10T03:57:57.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.240 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.240 Verification LBA range: start 0x0 length 0x400 00:21:06.240 Nvme1n1 : 1.13 282.94 17.68 0.00 0.00 224161.40 15541.39 208716.56 00:21:06.240 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.240 Verification LBA range: start 0x0 length 0x400 00:21:06.240 Nvme2n1 : 1.13 282.19 17.64 0.00 0.00 221540.16 17476.27 225693.50 00:21:06.240 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.240 Verification LBA range: start 0x0 length 0x400 00:21:06.240 Nvme3n1 : 1.12 286.65 17.92 0.00 0.00 215036.49 14917.24 204721.98 00:21:06.240 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.240 Verification LBA range: start 0x0 length 0x400 00:21:06.240 Nvme4n1 : 1.12 285.01 17.81 0.00 0.00 213153.30 14168.26 209715.20 00:21:06.240 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:06.240 Verification LBA range: start 0x0 length 0x400 00:21:06.240 Nvme5n1 : 1.12 232.31 14.52 0.00 0.00 251397.70 10860.25 226692.14 00:21:06.240 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.240 Verification LBA range: start 0x0 length 0x400 00:21:06.240 Nvme6n1 : 1.15 279.09 17.44 0.00 0.00 211860.92 17850.76 227690.79 00:21:06.240 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.240 Verification LBA range: start 0x0 length 0x400 00:21:06.240 Nvme7n1 : 1.14 280.10 17.51 0.00 0.00 207907.11 13856.18 230686.72 00:21:06.240 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.240 Verification LBA range: start 0x0 length 0x400 00:21:06.240 Nvme8n1 : 1.14 281.30 17.58 0.00 0.00 203831.20 26838.55 202724.69 00:21:06.240 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.240 Verification LBA range: start 0x0 length 0x400 00:21:06.240 Nvme9n1 : 1.15 278.45 17.40 0.00 0.00 203095.87 16602.45 226692.14 00:21:06.240 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.240 Verification LBA range: start 0x0 length 0x400 00:21:06.240 Nvme10n1 : 1.15 281.10 17.57 0.00 0.00 198205.74 518.83 233682.65 00:21:06.240 [2024-12-10T03:57:57.377Z] =================================================================================================================== 00:21:06.240 [2024-12-10T03:57:57.377Z] Total : 2769.15 173.07 0.00 0.00 214315.14 518.83 233682.65 00:21:06.499 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:06.499 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:06.499 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:21:06.499 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:06.499 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:06.499 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:06.499 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:06.500 rmmod nvme_tcp 00:21:06.500 rmmod nvme_fabrics 00:21:06.500 rmmod nvme_keyring 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 681516 ']' 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 681516 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 681516 ']' 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 681516 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 681516 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 681516' 00:21:06.500 killing process with pid 681516 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 681516 00:21:06.500 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 681516 00:21:06.758 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:06.758 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:06.758 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:06.758 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:06.759 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:06.759 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:06.759 04:57:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:06.759 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:06.759 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:07.018 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.018 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.018 04:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.924 04:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:08.924 00:21:08.924 real 0m15.393s 00:21:08.924 user 0m34.714s 00:21:08.924 sys 0m5.788s 00:21:08.925 04:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:08.925 04:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:08.925 ************************************ 00:21:08.925 END TEST nvmf_shutdown_tc1 00:21:08.925 ************************************ 00:21:08.925 04:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:08.925 04:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:08.925 04:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:08.925 04:57:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:08.925 ************************************ 00:21:08.925 
START TEST nvmf_shutdown_tc2 00:21:08.925 ************************************ 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:08.925 04:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:08.925 04:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:08.925 04:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:08.925 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.925 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:09.185 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:09.185 04:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.185 04:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:09.185 Found net devices under 0000:af:00.0: cvl_0_0 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:09.185 Found net devices under 0000:af:00.1: cvl_0_1 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:09.185 04:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:09.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:09.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:21:09.185 00:21:09.185 --- 10.0.0.2 ping statistics --- 00:21:09.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.185 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:21:09.185 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:09.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:09.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:21:09.185 00:21:09.185 --- 10.0.0.1 ping statistics --- 00:21:09.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.185 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:21:09.444 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.444 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:09.444 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:09.444 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.444 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:09.444 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:09.444 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.444 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:09.444 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:09.445 04:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:09.445 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:09.445 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:09.445 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.445 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=683271 00:21:09.445 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 683271 00:21:09.445 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:09.445 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 683271 ']' 00:21:09.445 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.445 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.445 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:09.445 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.445 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.445 [2024-12-10 04:58:00.415815] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:21:09.445 [2024-12-10 04:58:00.415858] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.445 [2024-12-10 04:58:00.496562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:09.445 [2024-12-10 04:58:00.538298] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.445 [2024-12-10 04:58:00.538334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.445 [2024-12-10 04:58:00.538342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.445 [2024-12-10 04:58:00.538348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.445 [2024-12-10 04:58:00.538353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:09.445 [2024-12-10 04:58:00.539868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.445 [2024-12-10 04:58:00.539974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:09.445 [2024-12-10 04:58:00.540009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.445 [2024-12-10 04:58:00.540010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:09.703 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.703 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:09.703 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:09.703 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:09.703 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.703 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.703 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:09.703 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.703 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.703 [2024-12-10 04:58:00.685367] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.703 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.703 04:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:09.703 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:09.703 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.704 04:58:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.704 Malloc1 00:21:09.704 [2024-12-10 04:58:00.800929] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.704 Malloc2 00:21:09.962 Malloc3 00:21:09.962 Malloc4 00:21:09.962 Malloc5 00:21:09.962 Malloc6 00:21:09.962 Malloc7 00:21:09.962 Malloc8 00:21:10.222 Malloc9 
00:21:10.222 Malloc10 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=683535 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 683535 /var/tmp/bdevperf.sock 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 683535 ']' 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:10.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.222 { 00:21:10.222 "params": { 00:21:10.222 "name": "Nvme$subsystem", 00:21:10.222 "trtype": "$TEST_TRANSPORT", 00:21:10.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.222 "adrfam": "ipv4", 00:21:10.222 "trsvcid": "$NVMF_PORT", 00:21:10.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.222 "hdgst": ${hdgst:-false}, 00:21:10.222 "ddgst": ${ddgst:-false} 00:21:10.222 }, 00:21:10.222 "method": "bdev_nvme_attach_controller" 00:21:10.222 } 00:21:10.222 EOF 00:21:10.222 )") 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.222 { 00:21:10.222 "params": { 00:21:10.222 "name": "Nvme$subsystem", 00:21:10.222 "trtype": "$TEST_TRANSPORT", 00:21:10.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.222 
"adrfam": "ipv4", 00:21:10.222 "trsvcid": "$NVMF_PORT", 00:21:10.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.222 "hdgst": ${hdgst:-false}, 00:21:10.222 "ddgst": ${ddgst:-false} 00:21:10.222 }, 00:21:10.222 "method": "bdev_nvme_attach_controller" 00:21:10.222 } 00:21:10.222 EOF 00:21:10.222 )") 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.222 { 00:21:10.222 "params": { 00:21:10.222 "name": "Nvme$subsystem", 00:21:10.222 "trtype": "$TEST_TRANSPORT", 00:21:10.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.222 "adrfam": "ipv4", 00:21:10.222 "trsvcid": "$NVMF_PORT", 00:21:10.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.222 "hdgst": ${hdgst:-false}, 00:21:10.222 "ddgst": ${ddgst:-false} 00:21:10.222 }, 00:21:10.222 "method": "bdev_nvme_attach_controller" 00:21:10.222 } 00:21:10.222 EOF 00:21:10.222 )") 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.222 { 00:21:10.222 "params": { 00:21:10.222 "name": "Nvme$subsystem", 00:21:10.222 "trtype": "$TEST_TRANSPORT", 00:21:10.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.222 "adrfam": "ipv4", 00:21:10.222 "trsvcid": "$NVMF_PORT", 00:21:10.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:10.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.222 "hdgst": ${hdgst:-false}, 00:21:10.222 "ddgst": ${ddgst:-false} 00:21:10.222 }, 00:21:10.222 "method": "bdev_nvme_attach_controller" 00:21:10.222 } 00:21:10.222 EOF 00:21:10.222 )") 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.222 { 00:21:10.222 "params": { 00:21:10.222 "name": "Nvme$subsystem", 00:21:10.222 "trtype": "$TEST_TRANSPORT", 00:21:10.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.222 "adrfam": "ipv4", 00:21:10.222 "trsvcid": "$NVMF_PORT", 00:21:10.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.222 "hdgst": ${hdgst:-false}, 00:21:10.222 "ddgst": ${ddgst:-false} 00:21:10.222 }, 00:21:10.222 "method": "bdev_nvme_attach_controller" 00:21:10.222 } 00:21:10.222 EOF 00:21:10.222 )") 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.222 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.222 { 00:21:10.222 "params": { 00:21:10.222 "name": "Nvme$subsystem", 00:21:10.222 "trtype": "$TEST_TRANSPORT", 00:21:10.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.222 "adrfam": "ipv4", 00:21:10.222 "trsvcid": "$NVMF_PORT", 00:21:10.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.223 "hdgst": ${hdgst:-false}, 00:21:10.223 "ddgst": 
${ddgst:-false} 00:21:10.223 }, 00:21:10.223 "method": "bdev_nvme_attach_controller" 00:21:10.223 } 00:21:10.223 EOF 00:21:10.223 )") 00:21:10.223 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.223 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.223 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.223 { 00:21:10.223 "params": { 00:21:10.223 "name": "Nvme$subsystem", 00:21:10.223 "trtype": "$TEST_TRANSPORT", 00:21:10.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.223 "adrfam": "ipv4", 00:21:10.223 "trsvcid": "$NVMF_PORT", 00:21:10.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.223 "hdgst": ${hdgst:-false}, 00:21:10.223 "ddgst": ${ddgst:-false} 00:21:10.223 }, 00:21:10.223 "method": "bdev_nvme_attach_controller" 00:21:10.223 } 00:21:10.223 EOF 00:21:10.223 )") 00:21:10.223 [2024-12-10 04:58:01.280357] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:21:10.223 [2024-12-10 04:58:01.280404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid683535 ] 00:21:10.223 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.223 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.223 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.223 { 00:21:10.223 "params": { 00:21:10.223 "name": "Nvme$subsystem", 00:21:10.223 "trtype": "$TEST_TRANSPORT", 00:21:10.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.223 "adrfam": "ipv4", 00:21:10.223 "trsvcid": "$NVMF_PORT", 00:21:10.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.223 "hdgst": ${hdgst:-false}, 00:21:10.223 "ddgst": ${ddgst:-false} 00:21:10.223 }, 00:21:10.223 "method": "bdev_nvme_attach_controller" 00:21:10.223 } 00:21:10.223 EOF 00:21:10.223 )") 00:21:10.223 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.223 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.223 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.223 { 00:21:10.223 "params": { 00:21:10.223 "name": "Nvme$subsystem", 00:21:10.223 "trtype": "$TEST_TRANSPORT", 00:21:10.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.223 "adrfam": "ipv4", 00:21:10.223 "trsvcid": "$NVMF_PORT", 00:21:10.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.223 "hdgst": 
${hdgst:-false}, 00:21:10.223 "ddgst": ${ddgst:-false} 00:21:10.223 }, 00:21:10.223 "method": "bdev_nvme_attach_controller" 00:21:10.223 } 00:21:10.223 EOF 00:21:10.223 )") 00:21:10.223 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.223 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.223 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.223 { 00:21:10.223 "params": { 00:21:10.223 "name": "Nvme$subsystem", 00:21:10.223 "trtype": "$TEST_TRANSPORT", 00:21:10.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.223 "adrfam": "ipv4", 00:21:10.223 "trsvcid": "$NVMF_PORT", 00:21:10.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.223 "hdgst": ${hdgst:-false}, 00:21:10.223 "ddgst": ${ddgst:-false} 00:21:10.223 }, 00:21:10.223 "method": "bdev_nvme_attach_controller" 00:21:10.223 } 00:21:10.223 EOF 00:21:10.223 )") 00:21:10.223 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.223 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:21:10.223 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:10.223 04:58:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:10.223 "params": { 00:21:10.223 "name": "Nvme1", 00:21:10.223 "trtype": "tcp", 00:21:10.223 "traddr": "10.0.0.2", 00:21:10.223 "adrfam": "ipv4", 00:21:10.223 "trsvcid": "4420", 00:21:10.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:10.223 "hdgst": false, 00:21:10.223 "ddgst": false 00:21:10.223 }, 00:21:10.223 "method": "bdev_nvme_attach_controller" 00:21:10.223 },{ 00:21:10.223 "params": { 00:21:10.223 "name": "Nvme2", 00:21:10.223 "trtype": "tcp", 00:21:10.223 "traddr": "10.0.0.2", 00:21:10.223 "adrfam": "ipv4", 00:21:10.223 "trsvcid": "4420", 00:21:10.223 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:10.223 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:10.223 "hdgst": false, 00:21:10.223 "ddgst": false 00:21:10.223 }, 00:21:10.223 "method": "bdev_nvme_attach_controller" 00:21:10.223 },{ 00:21:10.223 "params": { 00:21:10.223 "name": "Nvme3", 00:21:10.223 "trtype": "tcp", 00:21:10.223 "traddr": "10.0.0.2", 00:21:10.223 "adrfam": "ipv4", 00:21:10.223 "trsvcid": "4420", 00:21:10.223 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:10.223 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:10.223 "hdgst": false, 00:21:10.223 "ddgst": false 00:21:10.223 }, 00:21:10.223 "method": "bdev_nvme_attach_controller" 00:21:10.223 },{ 00:21:10.223 "params": { 00:21:10.223 "name": "Nvme4", 00:21:10.223 "trtype": "tcp", 00:21:10.223 "traddr": "10.0.0.2", 00:21:10.223 "adrfam": "ipv4", 00:21:10.223 "trsvcid": "4420", 00:21:10.223 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:10.223 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:10.223 "hdgst": false, 00:21:10.223 "ddgst": false 00:21:10.223 }, 00:21:10.223 "method": "bdev_nvme_attach_controller" 00:21:10.223 },{ 00:21:10.223 "params": { 
00:21:10.223 "name": "Nvme5", 00:21:10.223 "trtype": "tcp", 00:21:10.223 "traddr": "10.0.0.2", 00:21:10.223 "adrfam": "ipv4", 00:21:10.223 "trsvcid": "4420", 00:21:10.223 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:10.223 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:10.223 "hdgst": false, 00:21:10.223 "ddgst": false 00:21:10.223 }, 00:21:10.223 "method": "bdev_nvme_attach_controller" 00:21:10.223 },{ 00:21:10.223 "params": { 00:21:10.223 "name": "Nvme6", 00:21:10.223 "trtype": "tcp", 00:21:10.223 "traddr": "10.0.0.2", 00:21:10.223 "adrfam": "ipv4", 00:21:10.223 "trsvcid": "4420", 00:21:10.223 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:10.223 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:10.223 "hdgst": false, 00:21:10.223 "ddgst": false 00:21:10.223 }, 00:21:10.223 "method": "bdev_nvme_attach_controller" 00:21:10.223 },{ 00:21:10.223 "params": { 00:21:10.223 "name": "Nvme7", 00:21:10.223 "trtype": "tcp", 00:21:10.223 "traddr": "10.0.0.2", 00:21:10.223 "adrfam": "ipv4", 00:21:10.223 "trsvcid": "4420", 00:21:10.223 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:10.223 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:10.223 "hdgst": false, 00:21:10.223 "ddgst": false 00:21:10.223 }, 00:21:10.223 "method": "bdev_nvme_attach_controller" 00:21:10.223 },{ 00:21:10.223 "params": { 00:21:10.223 "name": "Nvme8", 00:21:10.223 "trtype": "tcp", 00:21:10.223 "traddr": "10.0.0.2", 00:21:10.223 "adrfam": "ipv4", 00:21:10.223 "trsvcid": "4420", 00:21:10.223 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:10.223 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:10.223 "hdgst": false, 00:21:10.223 "ddgst": false 00:21:10.223 }, 00:21:10.223 "method": "bdev_nvme_attach_controller" 00:21:10.223 },{ 00:21:10.223 "params": { 00:21:10.223 "name": "Nvme9", 00:21:10.223 "trtype": "tcp", 00:21:10.223 "traddr": "10.0.0.2", 00:21:10.223 "adrfam": "ipv4", 00:21:10.223 "trsvcid": "4420", 00:21:10.223 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:10.223 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:10.223 "hdgst": false, 00:21:10.223 "ddgst": false 00:21:10.223 }, 00:21:10.223 "method": "bdev_nvme_attach_controller" 00:21:10.223 },{ 00:21:10.223 "params": { 00:21:10.223 "name": "Nvme10", 00:21:10.223 "trtype": "tcp", 00:21:10.223 "traddr": "10.0.0.2", 00:21:10.223 "adrfam": "ipv4", 00:21:10.223 "trsvcid": "4420", 00:21:10.223 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:10.223 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:10.223 "hdgst": false, 00:21:10.223 "ddgst": false 00:21:10.223 }, 00:21:10.223 "method": "bdev_nvme_attach_controller" 00:21:10.223 }' 00:21:10.483 [2024-12-10 04:58:01.358032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.483 [2024-12-10 04:58:01.397994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.858 Running I/O for 10 seconds... 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:12.118 04:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:12.118 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:12.377 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:12.377 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:12.377 04:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:12.377 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:12.377 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.377 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:12.636 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.636 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:12.636 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:12.636 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:12.636 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:12.636 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:12.636 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 683535 00:21:12.636 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 683535 ']' 00:21:12.636 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 683535 00:21:12.636 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:12.636 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.636 04:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 683535 00:21:12.636 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:12.636 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:12.636 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 683535' 00:21:12.636 killing process with pid 683535 00:21:12.636 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 683535 00:21:12.636 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 683535 00:21:12.636 Received shutdown signal, test time was about 0.797897 seconds 00:21:12.636 00:21:12.636 Latency(us) 00:21:12.636 [2024-12-10T03:58:03.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.636 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.636 Verification LBA range: start 0x0 length 0x400 00:21:12.636 Nvme1n1 : 0.76 252.53 15.78 0.00 0.00 250184.01 16103.13 214708.42 00:21:12.636 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.636 Verification LBA range: start 0x0 length 0x400 00:21:12.636 Nvme2n1 : 0.77 254.69 15.92 0.00 0.00 242104.65 2418.59 196732.83 00:21:12.636 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.636 Verification LBA range: start 0x0 length 0x400 00:21:12.636 Nvme3n1 : 0.79 330.06 20.63 0.00 0.00 183405.14 4649.94 206719.27 00:21:12.636 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.636 Verification LBA range: start 0x0 length 0x400 00:21:12.636 Nvme4n1 : 0.79 323.65 20.23 0.00 0.00 183623.68 
17101.78 213709.78 00:21:12.636 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.636 Verification LBA range: start 0x0 length 0x400 00:21:12.636 Nvme5n1 : 0.78 247.42 15.46 0.00 0.00 234804.83 17975.59 217704.35 00:21:12.636 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.636 Verification LBA range: start 0x0 length 0x400 00:21:12.636 Nvme6n1 : 0.79 322.26 20.14 0.00 0.00 176748.86 15478.98 193736.90 00:21:12.636 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.636 Verification LBA range: start 0x0 length 0x400 00:21:12.636 Nvme7n1 : 0.80 321.10 20.07 0.00 0.00 173622.61 14293.09 214708.42 00:21:12.636 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.636 Verification LBA range: start 0x0 length 0x400 00:21:12.636 Nvme8n1 : 0.77 248.52 15.53 0.00 0.00 218248.70 12982.37 208716.56 00:21:12.636 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.636 Verification LBA range: start 0x0 length 0x400 00:21:12.636 Nvme9n1 : 0.78 245.69 15.36 0.00 0.00 216236.29 18599.74 217704.35 00:21:12.636 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.636 Verification LBA range: start 0x0 length 0x400 00:21:12.636 Nvme10n1 : 0.78 244.65 15.29 0.00 0.00 212291.78 18599.74 234681.30 00:21:12.636 [2024-12-10T03:58:03.773Z] =================================================================================================================== 00:21:12.636 [2024-12-10T03:58:03.773Z] Total : 2790.58 174.41 0.00 0.00 205650.00 2418.59 234681.30 00:21:12.895 04:58:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 683271 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:13.831 rmmod nvme_tcp 00:21:13.831 rmmod nvme_fabrics 00:21:13.831 rmmod nvme_keyring 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 683271 ']' 
00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 683271 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 683271 ']' 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 683271 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.831 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 683271 00:21:14.090 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:14.090 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:14.090 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 683271' 00:21:14.090 killing process with pid 683271 00:21:14.090 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 683271 00:21:14.090 04:58:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 683271 00:21:14.349 04:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:14.349 04:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:14.349 04:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:14.349 04:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:14.349 04:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:14.349 04:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:14.349 04:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:14.349 04:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:14.349 04:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:14.349 04:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.349 04:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.349 04:58:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:16.886 00:21:16.886 real 0m7.379s 00:21:16.886 user 0m21.637s 00:21:16.886 sys 0m1.315s 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:16.886 ************************************ 00:21:16.886 END TEST nvmf_shutdown_tc2 00:21:16.886 ************************************ 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:16.886 04:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:16.886 ************************************ 00:21:16.886 START TEST nvmf_shutdown_tc3 00:21:16.886 ************************************ 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.886 04:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.886 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:16.886 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.887 04:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:16.887 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:16.887 Found net devices under 0000:af:00.0: cvl_0_0 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:16.887 Found net devices under 0000:af:00.1: cvl_0_1 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.887 
04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.887 04:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:21:16.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:21:16.887 00:21:16.887 --- 10.0.0.2 ping statistics --- 00:21:16.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.887 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:16.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:16.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:21:16.887 00:21:16.887 --- 10.0.0.1 ping statistics --- 00:21:16.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.887 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=684597 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 684597 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 684597 ']' 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.887 04:58:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:16.887 [2024-12-10 04:58:07.974903] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:21:16.887 [2024-12-10 04:58:07.974950] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.147 [2024-12-10 04:58:08.055338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:17.147 [2024-12-10 04:58:08.096225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.147 [2024-12-10 04:58:08.096263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.147 [2024-12-10 04:58:08.096269] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.147 [2024-12-10 04:58:08.096275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.147 [2024-12-10 04:58:08.096284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:17.147 [2024-12-10 04:58:08.097660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.147 [2024-12-10 04:58:08.097679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.147 [2024-12-10 04:58:08.097774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.147 [2024-12-10 04:58:08.097775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.147 [2024-12-10 04:58:08.246191] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.147 04:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.147 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:17.406 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.406 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.406 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.406 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.406 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.406 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.406 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.406 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.406 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.406 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.406 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:17.406 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.406 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.406 Malloc1 00:21:17.406 [2024-12-10 04:58:08.355053] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.406 Malloc2 00:21:17.406 Malloc3 00:21:17.406 Malloc4 00:21:17.406 Malloc5 00:21:17.665 Malloc6 00:21:17.665 Malloc7 00:21:17.665 Malloc8 00:21:17.665 Malloc9 
00:21:17.665 Malloc10 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=684829 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 684829 /var/tmp/bdevperf.sock 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 684829 ']' 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:21:17.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.665 { 00:21:17.665 "params": { 00:21:17.665 "name": "Nvme$subsystem", 00:21:17.665 "trtype": "$TEST_TRANSPORT", 00:21:17.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.665 "adrfam": "ipv4", 00:21:17.665 "trsvcid": "$NVMF_PORT", 00:21:17.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.665 "hdgst": ${hdgst:-false}, 00:21:17.665 "ddgst": ${ddgst:-false} 00:21:17.665 }, 00:21:17.665 "method": "bdev_nvme_attach_controller" 00:21:17.665 } 00:21:17.665 EOF 00:21:17.665 )") 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.665 { 00:21:17.665 "params": { 00:21:17.665 "name": "Nvme$subsystem", 00:21:17.665 "trtype": "$TEST_TRANSPORT", 00:21:17.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.665 "adrfam": "ipv4", 00:21:17.665 "trsvcid": "$NVMF_PORT", 00:21:17.665 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.665 "hdgst": ${hdgst:-false}, 00:21:17.665 "ddgst": ${ddgst:-false} 00:21:17.665 }, 00:21:17.665 "method": "bdev_nvme_attach_controller" 00:21:17.665 } 00:21:17.665 EOF 00:21:17.665 )") 00:21:17.665 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.925 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.925 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.925 { 00:21:17.925 "params": { 00:21:17.925 "name": "Nvme$subsystem", 00:21:17.925 "trtype": "$TEST_TRANSPORT", 00:21:17.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.925 "adrfam": "ipv4", 00:21:17.925 "trsvcid": "$NVMF_PORT", 00:21:17.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.925 "hdgst": ${hdgst:-false}, 00:21:17.925 "ddgst": ${ddgst:-false} 00:21:17.925 }, 00:21:17.925 "method": "bdev_nvme_attach_controller" 00:21:17.925 } 00:21:17.925 EOF 00:21:17.925 )") 00:21:17.925 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.925 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.925 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.925 { 00:21:17.925 "params": { 00:21:17.925 "name": "Nvme$subsystem", 00:21:17.925 "trtype": "$TEST_TRANSPORT", 00:21:17.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.925 "adrfam": "ipv4", 00:21:17.925 "trsvcid": "$NVMF_PORT", 00:21:17.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.925 "hdgst": 
${hdgst:-false}, 00:21:17.925 "ddgst": ${ddgst:-false} 00:21:17.925 }, 00:21:17.925 "method": "bdev_nvme_attach_controller" 00:21:17.925 } 00:21:17.925 EOF 00:21:17.925 )") 00:21:17.925 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.925 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.925 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.925 { 00:21:17.925 "params": { 00:21:17.925 "name": "Nvme$subsystem", 00:21:17.925 "trtype": "$TEST_TRANSPORT", 00:21:17.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.925 "adrfam": "ipv4", 00:21:17.925 "trsvcid": "$NVMF_PORT", 00:21:17.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.925 "hdgst": ${hdgst:-false}, 00:21:17.925 "ddgst": ${ddgst:-false} 00:21:17.925 }, 00:21:17.925 "method": "bdev_nvme_attach_controller" 00:21:17.925 } 00:21:17.925 EOF 00:21:17.925 )") 00:21:17.925 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.925 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.925 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.925 { 00:21:17.925 "params": { 00:21:17.925 "name": "Nvme$subsystem", 00:21:17.925 "trtype": "$TEST_TRANSPORT", 00:21:17.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.925 "adrfam": "ipv4", 00:21:17.925 "trsvcid": "$NVMF_PORT", 00:21:17.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.925 "hdgst": ${hdgst:-false}, 00:21:17.925 "ddgst": ${ddgst:-false} 00:21:17.925 }, 00:21:17.925 "method": "bdev_nvme_attach_controller" 
00:21:17.925 } 00:21:17.925 EOF 00:21:17.925 )") 00:21:17.925 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.925 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.925 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.925 { 00:21:17.925 "params": { 00:21:17.925 "name": "Nvme$subsystem", 00:21:17.925 "trtype": "$TEST_TRANSPORT", 00:21:17.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.925 "adrfam": "ipv4", 00:21:17.925 "trsvcid": "$NVMF_PORT", 00:21:17.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.925 "hdgst": ${hdgst:-false}, 00:21:17.925 "ddgst": ${ddgst:-false} 00:21:17.925 }, 00:21:17.926 "method": "bdev_nvme_attach_controller" 00:21:17.926 } 00:21:17.926 EOF 00:21:17.926 )") 00:21:17.926 [2024-12-10 04:58:08.829926] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:21:17.926 [2024-12-10 04:58:08.829977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid684829 ] 00:21:17.926 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.926 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.926 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.926 { 00:21:17.926 "params": { 00:21:17.926 "name": "Nvme$subsystem", 00:21:17.926 "trtype": "$TEST_TRANSPORT", 00:21:17.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.926 "adrfam": "ipv4", 00:21:17.926 "trsvcid": "$NVMF_PORT", 00:21:17.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.926 "hdgst": ${hdgst:-false}, 00:21:17.926 "ddgst": ${ddgst:-false} 00:21:17.926 }, 00:21:17.926 "method": "bdev_nvme_attach_controller" 00:21:17.926 } 00:21:17.926 EOF 00:21:17.926 )") 00:21:17.926 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.926 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.926 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.926 { 00:21:17.926 "params": { 00:21:17.926 "name": "Nvme$subsystem", 00:21:17.926 "trtype": "$TEST_TRANSPORT", 00:21:17.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.926 "adrfam": "ipv4", 00:21:17.926 "trsvcid": "$NVMF_PORT", 00:21:17.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.926 "hdgst": 
${hdgst:-false}, 00:21:17.926 "ddgst": ${ddgst:-false} 00:21:17.926 }, 00:21:17.926 "method": "bdev_nvme_attach_controller" 00:21:17.926 } 00:21:17.926 EOF 00:21:17.926 )") 00:21:17.926 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.926 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.926 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.926 { 00:21:17.926 "params": { 00:21:17.926 "name": "Nvme$subsystem", 00:21:17.926 "trtype": "$TEST_TRANSPORT", 00:21:17.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.926 "adrfam": "ipv4", 00:21:17.926 "trsvcid": "$NVMF_PORT", 00:21:17.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.926 "hdgst": ${hdgst:-false}, 00:21:17.926 "ddgst": ${ddgst:-false} 00:21:17.926 }, 00:21:17.926 "method": "bdev_nvme_attach_controller" 00:21:17.926 } 00:21:17.926 EOF 00:21:17.926 )") 00:21:17.926 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.926 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
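The repeated `config+=("$(cat <<-EOF …)")` fragments above come from `gen_nvmf_target_json` (nvmf/common.sh@560-586): it emits one JSON connection stanza per subsystem from a heredoc template, then comma-joins them via `IFS=,` for bdevperf's `--json` input. A minimal sketch of that pattern with three subsystems; the concrete values (tcp, 10.0.0.2, port 4420) are the ones substituted in this log, and the `jq .` validation step is left as a comment:

```shell
# Sketch of the gen_nvmf_target_json assembly (nvmf/common.sh@560-586).
# Template values mirror what the log substitutes; 3 subsystems not 10.
config=()
for subsystem in 1 2 3; do
  # Each iteration captures one filled-in JSON stanza into the array.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# common.sh@585-586: comma-join the stanzas ("jq ." would validate here).
IFS=,
joined="${config[*]}"
printf '%s\n' "$joined"
```

The comma join is what produces the `},{` seams visible in the printed config that follows.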
00:21:17.926 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:17.926 04:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:17.926 "params": { 00:21:17.926 "name": "Nvme1", 00:21:17.926 "trtype": "tcp", 00:21:17.926 "traddr": "10.0.0.2", 00:21:17.926 "adrfam": "ipv4", 00:21:17.926 "trsvcid": "4420", 00:21:17.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:17.926 "hdgst": false, 00:21:17.926 "ddgst": false 00:21:17.926 }, 00:21:17.926 "method": "bdev_nvme_attach_controller" 00:21:17.926 },{ 00:21:17.926 "params": { 00:21:17.926 "name": "Nvme2", 00:21:17.926 "trtype": "tcp", 00:21:17.926 "traddr": "10.0.0.2", 00:21:17.926 "adrfam": "ipv4", 00:21:17.926 "trsvcid": "4420", 00:21:17.926 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:17.926 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:17.926 "hdgst": false, 00:21:17.926 "ddgst": false 00:21:17.926 }, 00:21:17.926 "method": "bdev_nvme_attach_controller" 00:21:17.926 },{ 00:21:17.926 "params": { 00:21:17.926 "name": "Nvme3", 00:21:17.926 "trtype": "tcp", 00:21:17.926 "traddr": "10.0.0.2", 00:21:17.926 "adrfam": "ipv4", 00:21:17.926 "trsvcid": "4420", 00:21:17.926 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:17.926 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:17.926 "hdgst": false, 00:21:17.926 "ddgst": false 00:21:17.926 }, 00:21:17.926 "method": "bdev_nvme_attach_controller" 00:21:17.926 },{ 00:21:17.926 "params": { 00:21:17.926 "name": "Nvme4", 00:21:17.926 "trtype": "tcp", 00:21:17.926 "traddr": "10.0.0.2", 00:21:17.926 "adrfam": "ipv4", 00:21:17.926 "trsvcid": "4420", 00:21:17.926 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:17.926 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:17.926 "hdgst": false, 00:21:17.926 "ddgst": false 00:21:17.926 }, 00:21:17.926 "method": "bdev_nvme_attach_controller" 00:21:17.926 },{ 00:21:17.926 "params": { 
00:21:17.926 "name": "Nvme5", 00:21:17.926 "trtype": "tcp", 00:21:17.926 "traddr": "10.0.0.2", 00:21:17.926 "adrfam": "ipv4", 00:21:17.926 "trsvcid": "4420", 00:21:17.926 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:17.926 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:17.926 "hdgst": false, 00:21:17.926 "ddgst": false 00:21:17.926 }, 00:21:17.926 "method": "bdev_nvme_attach_controller" 00:21:17.926 },{ 00:21:17.926 "params": { 00:21:17.926 "name": "Nvme6", 00:21:17.926 "trtype": "tcp", 00:21:17.926 "traddr": "10.0.0.2", 00:21:17.926 "adrfam": "ipv4", 00:21:17.926 "trsvcid": "4420", 00:21:17.926 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:17.926 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:17.926 "hdgst": false, 00:21:17.926 "ddgst": false 00:21:17.926 }, 00:21:17.926 "method": "bdev_nvme_attach_controller" 00:21:17.926 },{ 00:21:17.926 "params": { 00:21:17.926 "name": "Nvme7", 00:21:17.926 "trtype": "tcp", 00:21:17.926 "traddr": "10.0.0.2", 00:21:17.926 "adrfam": "ipv4", 00:21:17.926 "trsvcid": "4420", 00:21:17.926 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:17.926 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:17.926 "hdgst": false, 00:21:17.926 "ddgst": false 00:21:17.926 }, 00:21:17.926 "method": "bdev_nvme_attach_controller" 00:21:17.926 },{ 00:21:17.926 "params": { 00:21:17.926 "name": "Nvme8", 00:21:17.926 "trtype": "tcp", 00:21:17.926 "traddr": "10.0.0.2", 00:21:17.926 "adrfam": "ipv4", 00:21:17.926 "trsvcid": "4420", 00:21:17.926 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:17.926 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:17.926 "hdgst": false, 00:21:17.926 "ddgst": false 00:21:17.926 }, 00:21:17.926 "method": "bdev_nvme_attach_controller" 00:21:17.926 },{ 00:21:17.926 "params": { 00:21:17.926 "name": "Nvme9", 00:21:17.926 "trtype": "tcp", 00:21:17.926 "traddr": "10.0.0.2", 00:21:17.926 "adrfam": "ipv4", 00:21:17.926 "trsvcid": "4420", 00:21:17.926 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:17.926 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:17.926 "hdgst": false, 00:21:17.926 "ddgst": false 00:21:17.926 }, 00:21:17.926 "method": "bdev_nvme_attach_controller" 00:21:17.926 },{ 00:21:17.926 "params": { 00:21:17.926 "name": "Nvme10", 00:21:17.926 "trtype": "tcp", 00:21:17.926 "traddr": "10.0.0.2", 00:21:17.926 "adrfam": "ipv4", 00:21:17.926 "trsvcid": "4420", 00:21:17.926 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:17.926 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:17.926 "hdgst": false, 00:21:17.926 "ddgst": false 00:21:17.926 }, 00:21:17.926 "method": "bdev_nvme_attach_controller" 00:21:17.926 }' 00:21:17.926 [2024-12-10 04:58:08.904406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.926 [2024-12-10 04:58:08.943881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.304 Running I/O for 10 seconds... 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:19.872 04:58:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 684597 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 684597 ']' 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 684597 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:20.148 04:58:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 684597 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 684597' 00:21:20.148 killing process with pid 684597 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 684597 00:21:20.148 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 684597 00:21:20.149 [2024-12-10 04:58:11.158992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 
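The `waitforio` helper traced a little earlier (target/shutdown.sh@58-70) is a bounded poll: up to 10 attempts, each reading `num_read_ops` for `Nvme1n1` over the bdevperf RPC socket and succeeding once at least 100 reads have completed, sleeping 0.25 s between attempts (the log shows it pass on the second attempt, 67 then 195). A hedged sketch; `get_read_ops` is a stub standing in for the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'` pipeline:

```shell
# Sketch of waitforio (shutdown.sh@58-70). The stub below replaces the
# rpc_cmd|jq pipeline and just grows a counter, so the loop is testable.
reads=0
get_read_ops() { reads=$((reads + 70)); }  # stub: updates $reads in place

waitforio() {
	local ret=1 i
	for ((i = 10; i != 0; i--)); do
		get_read_ops
		# Declare I/O flowing once at least 100 reads completed.
		if [ "$reads" -ge 100 ]; then
			ret=0
			break
		fi
		sleep 0.25
	done
	return $ret
}

waitforio && echo "I/O flowing"
```

With the stub, the first attempt sees 70 reads and sleeps; the second sees 140 and breaks, mirroring the 67-then-195 progression in the trace.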
00:21:20.149 [2024-12-10 04:58:11.159113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159191] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set 00:21:20.149 [2024-12-10 04:58:11.159273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x52b0c0 is same with the state(6) to be set
00:21:20.193 [message above repeated for tqpair=0x52b0c0 through 2024-12-10 04:58:11.159485]
00:21:20.193 [2024-12-10 04:58:11.160685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79ef60 is same with the state(6) to be set
00:21:20.194 [message above repeated for tqpair=0x79ef60 through 2024-12-10 04:58:11.161123]
00:21:20.194 [2024-12-10 04:58:11.163418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ba80 is same with the state(6) to be set
00:21:20.195 [message above repeated for tqpair=0x52ba80 through 2024-12-10 04:58:11.163848]
00:21:20.195 [2024-12-10 04:58:11.164712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52bf70 is same with the state(6) to be set
00:21:20.196 [message above repeated for tqpair=0x52bf70 through 2024-12-10 04:58:11.165135]
00:21:20.196 [2024-12-10 04:58:11.165714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c440
is same with the state(6) to be set 00:21:20.196 [2024-12-10 04:58:11.165952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c440 is same with the state(6) to be set 00:21:20.196 [2024-12-10 04:58:11.165959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c440 is same with the state(6) to be set 00:21:20.196 [2024-12-10 04:58:11.165965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c440 is same with the state(6) to be set 00:21:20.196 [2024-12-10 04:58:11.165971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c440 is same with the state(6) to be set 00:21:20.196 [2024-12-10 04:58:11.165977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c440 is same with the state(6) to be set 00:21:20.196 [2024-12-10 04:58:11.165983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c440 is same with the state(6) to be set 00:21:20.196 [2024-12-10 04:58:11.165989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c440 is same with the state(6) to be set 00:21:20.196 [2024-12-10 04:58:11.165996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c440 is same with the state(6) to be set 00:21:20.196 [2024-12-10 04:58:11.166002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c440 is same with the state(6) to be set 00:21:20.196 [2024-12-10 04:58:11.166008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c440 is same with the state(6) to be set 00:21:20.196 [2024-12-10 04:58:11.166014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c440 is same with the state(6) to be set 00:21:20.196 [2024-12-10 04:58:11.166020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c440 is same with the state(6) to be set 
[... identical tcp.c:1790 *ERROR* line repeated for tqpair=0x52c440 from 04:58:11.166028 through 04:58:11.166113 ...]
00:21:20.196 [2024-12-10 04:58:11.166932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set
[... identical *ERROR* line repeated for tqpair=0x52c910 through 04:58:11.167154 ...]
00:21:20.197 [2024-12-10 04:58:11.167161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167214] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167245] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.167341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52c910 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.168257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cde0 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.168272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cde0 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.168278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cde0 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.168285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cde0 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.168294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cde0 is same with the state(6) to be set 00:21:20.197 [2024-12-10 04:58:11.168802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.168834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.168844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.168851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.168858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.168865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.168873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.168879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.168885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa41170 is same with the state(6) to be set 00:21:20.198 [2024-12-10 04:58:11.168924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.168933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.168941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.168948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.168956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 
[2024-12-10 04:58:11.168962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.168969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.168975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.168981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4fa610 is same with the state(6) to be set 00:21:20.198 [2024-12-10 04:58:11.169007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa52b60 is same with the state(6) to be set 00:21:20.198 [2024-12-10 04:58:11.169090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9250 is same with the state(6) to be set 00:21:20.198 [2024-12-10 04:58:11.169174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 
04:58:11.169183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e4f80 is same with the state(6) to be set 00:21:20.198 [2024-12-10 04:58:11.169253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e4570 is same with the state(6) to be set 00:21:20.198 [2024-12-10 04:58:11.169334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169378] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e5410 is same with the state(6) to be set 00:21:20.198 [2024-12-10 04:58:11.169412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9450 is 
same with the state(6) to be set 00:21:20.198 [2024-12-10 04:58:11.169489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with t[2024-12-10 04:58:11.169496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nshe state(6) to be set 00:21:20.198 id:0 cdw10:00000000 cdw11:00000000 00:21:20.198 [2024-12-10 04:58:11.169506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.198 [2024-12-10 04:58:11.169507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.198 [2024-12-10 04:58:11.169515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.199 [2024-12-10 04:58:11.169518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.199 [2024-12-10 04:58:11.169526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.199 [2024-12-10 04:58:11.169533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.199 [2024-12-10 04:58:11.169541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.199 [2024-12-10 04:58:11.169548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.199 [2024-12-10 04:58:11.169555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa113d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 
is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 
00:21:20.199 [2024-12-10 04:58:11.169686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.169755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d2d0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.170138] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.199 [2024-12-10 04:58:11.170156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.199 [2024-12-10 04:58:11.170178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.199 [2024-12-10 04:58:11.170186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.199 [2024-12-10 04:58:11.170195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.199 [2024-12-10 04:58:11.170193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.170202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.199 [2024-12-10 04:58:11.170208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.170212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.199 [2024-12-10 04:58:11.170217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.170229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.170229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:20.199 [2024-12-10 04:58:11.170241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.170241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.199 [2024-12-10 04:58:11.170249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.170250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.199 [2024-12-10 04:58:11.170258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.170261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.199 [2024-12-10 04:58:11.170266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.170269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.199 [2024-12-10 04:58:11.170273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.170278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.199 [2024-12-10 04:58:11.170280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.170286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.199 [2024-12-10 04:58:11.170288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.170295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.199 [2024-12-10 04:58:11.170295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.170306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.199 [2024-12-10 04:58:11.170307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.170316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.170318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.199 [2024-12-10 04:58:11.170323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.170326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.199 [2024-12-10 04:58:11.170330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.199 [2024-12-10 04:58:11.170336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170337] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.200 [2024-12-10 04:58:11.170346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.200 [2024-12-10 04:58:11.170369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.200 [2024-12-10 04:58:11.170382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 
[2024-12-10 04:58:11.170389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.200 [2024-12-10 04:58:11.170406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.200 [2024-12-10 04:58:11.170420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.200 [2024-12-10 04:58:11.170445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.200 [2024-12-10 04:58:11.170460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.200 [2024-12-10 04:58:11.170475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 
nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.200 [2024-12-10 04:58:11.170500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.200 [2024-12-10 04:58:11.170514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.200 [2024-12-10 04:58:11.170534] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.200 [2024-12-10 04:58:11.170555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.200 [2024-12-10 04:58:11.170572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 
00:21:20.200 [2024-12-10 04:58:11.170584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.200 [2024-12-10 04:58:11.170586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.200 [2024-12-10 04:58:11.170607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.200 [2024-12-10 04:58:11.170622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.200 [2024-12-10 04:58:11.170640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.200 [2024-12-10 04:58:11.170655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.200 [2024-12-10 04:58:11.170658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.201 [2024-12-10 04:58:11.170662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.201 [2024-12-10 04:58:11.170667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.201 [2024-12-10 04:58:11.170672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.201 [2024-12-10 04:58:11.170675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.201 [2024-12-10 04:58:11.170680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.201 [2024-12-10 04:58:11.170684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.201 [2024-12-10 04:58:11.170687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d7a0 is same with the state(6) to be set 00:21:20.201 [2024-12-10 04:58:11.170691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.201 [2024-12-10 04:58:11.170700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.201 [2024-12-10 04:58:11.170706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.201 [2024-12-10 04:58:11.170714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.201 [2024-12-10 04:58:11.170721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.201 [2024-12-10 04:58:11.170729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.201 [2024-12-10 04:58:11.170735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.201 [2024-12-10 04:58:11.170743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.201 [2024-12-10 04:58:11.170750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:20.201 [2024-12-10 04:58:11.170758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.201 [2024-12-10 04:58:11.170764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.201 [2024-12-10 04:58:11.170773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.201 [2024-12-10 04:58:11.170779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.201 [2024-12-10 04:58:11.170788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.201 [2024-12-10 04:58:11.170794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.201 [2024-12-10 04:58:11.170802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.201 [2024-12-10 04:58:11.170808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.201 [2024-12-10 04:58:11.170816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.201 [2024-12-10 04:58:11.170823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.201 [2024-12-10 04:58:11.170832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.201 [2024-12-10 
04:58:11.170838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.201 [2024-12-10 04:58:11.170846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.201 [2024-12-10 04:58:11.170852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE (cid:40..62, lba:29696..32512 in steps of 128) / ABORTED - SQ DELETION completion pairs, 04:58:11.170863 through 04:58:11.171191, elided ...]
00:21:20.202 [2024-12-10 04:58:11.171200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.202 [2024-12-10 04:58:11.171206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.202 [2024-12-10 04:58:11.171230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:20.202 [2024-12-10 04:58:11.171900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.202 [2024-12-10 04:58:11.171920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE (cid:1..33, lba:24704..28800 in steps of 128) / ABORTED - SQ DELETION completion pairs, 04:58:11.171933 through 04:58:11.172431, elided ...]
00:21:20.203 [2024-12-10 04:58:11.183090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.203 [2024-12-10 04:58:11.183108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE (cid:35..62, lba:29056..32512 in steps of 128) / ABORTED - SQ DELETION completion pairs, 04:58:11.183121 through 04:58:11.183702, elided ...]
00:21:20.203 [2024-12-10 04:58:11.183713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.203 [2024-12-10 04:58:11.183722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.203 [2024-12-10 04:58:11.183754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:20.203 [2024-12-10 04:58:11.183997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa41170 (9): Bad file descriptor
00:21:20.203 [2024-12-10 04:58:11.184043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.203 [2024-12-10 04:58:11.184057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.203 [2024-12-10 04:58:11.184067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.203 [2024-12-10 04:58:11.184076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.203 [2024-12-10 04:58:11.184086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.203 [2024-12-10 04:58:11.184096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.203 [2024-12-10 04:58:11.184106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.204 [2024-12-10 04:58:11.184114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.204 [2024-12-10 04:58:11.184124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa39300 is same with the state(6) to be set
00:21:20.204 [2024-12-10 04:58:11.184144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4fa610 (9): Bad file descriptor
00:21:20.204 [2024-12-10 04:58:11.184175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa52b60 (9): Bad file descriptor
00:21:20.204 [2024-12-10 04:58:11.184193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d9250 (9): Bad file descriptor
00:21:20.204 [2024-12-10 04:58:11.184213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e4f80 (9): Bad file descriptor
00:21:20.204 [2024-12-10 04:58:11.184232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e4570 (9): Bad file descriptor
00:21:20.204 [2024-12-10 04:58:11.184253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e5410 (9): Bad file descriptor
00:21:20.204 [2024-12-10 04:58:11.184272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d9450 (9): Bad file descriptor
00:21:20.204 [2024-12-10 04:58:11.184288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa113d0 (9): Bad file descriptor
00:21:20.204 [2024-12-10 04:58:11.187086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:21:20.204 [2024-12-10 04:58:11.187592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:20.204 [2024-12-10 04:58:11.187821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.204 [2024-12-10 04:58:11.187850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d9450 with addr=10.0.0.2, port=4420
00:21:20.204 [2024-12-10 04:58:11.187863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9450 is same with the state(6) to be set
00:21:20.204 [2024-12-10 04:58:11.189074] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:20.204 [2024-12-10 04:58:11.189283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.204 [2024-12-10 04:58:11.189304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa52b60 with addr=10.0.0.2, port=4420
00:21:20.204 [2024-12-10 04:58:11.189314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa52b60 is same with the state(6) to be set
00:21:20.204 [2024-12-10 04:58:11.189328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d9450 (9): Bad file descriptor
00:21:20.204 [2024-12-10 04:58:11.189396] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:20.204 [2024-12-10 04:58:11.189467] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:20.204 [2024-12-10 04:58:11.189531] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:20.204 [2024-12-10 04:58:11.189583] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:20.204 [2024-12-10 04:58:11.189625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.204 [2024-12-10 04:58:11.189638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.204 [2024-12-10 04:58:11.189656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.204 [2024-12-10 04:58:11.189668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.204 [2024-12-10 04:58:11.189680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.204 [2024-12-10 04:58:11.189690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.204 [2024-12-10 04:58:11.189702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.204 [2024-12-10 04:58:11.189711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.204 [2024-12-10 04:58:11.189723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.204 [2024-12-10 04:58:11.189732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.204 [2024-12-10 04:58:11.189744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.204 [2024-12-10 04:58:11.189754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.204 [2024-12-10 04:58:11.189766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.204 [2024-12-10 04:58:11.189775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.204 [2024-12-10 04:58:11.189787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.204 [2024-12-10 04:58:11.189801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 
04:58:11.189814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.189823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.204 [2024-12-10 04:58:11.189835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.189845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.204 [2024-12-10 04:58:11.189856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.189865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.204 [2024-12-10 04:58:11.189876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.189885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.204 [2024-12-10 04:58:11.189897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.189906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.204 [2024-12-10 04:58:11.189918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.189927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.204 [2024-12-10 04:58:11.189939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.189948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.204 [2024-12-10 04:58:11.189959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.189968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.204 [2024-12-10 04:58:11.189980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.189988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.204 [2024-12-10 04:58:11.190001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.190009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.204 [2024-12-10 04:58:11.190020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.190030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.204 [2024-12-10 04:58:11.190041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.190051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.204 [2024-12-10 04:58:11.190069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.190079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.204 [2024-12-10 04:58:11.190091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.190101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.204 [2024-12-10 04:58:11.190113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.190122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.204 [2024-12-10 04:58:11.190134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.190142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.204 [2024-12-10 04:58:11.190154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.190163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:20.204 [2024-12-10 04:58:11.190182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.190192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.204 [2024-12-10 04:58:11.190202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.204 [2024-12-10 04:58:11.190212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190293] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 
04:58:11.190642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190758] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.190880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.190889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.191018] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:20.205 [2024-12-10 04:58:11.191061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa52b60 (9): Bad file descriptor 00:21:20.205 [2024-12-10 04:58:11.191077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:20.205 [2024-12-10 04:58:11.191086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:20.205 [2024-12-10 04:58:11.191096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:20.205 [2024-12-10 04:58:11.191106] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:21:20.205 [2024-12-10 04:58:11.191179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.191193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.205 [2024-12-10 04:58:11.191209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.205 [2024-12-10 04:58:11.191219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:20.206 [2024-12-10 04:58:11.191537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191650] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.206 [2024-12-10 04:58:11.191752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.206 [2024-12-10 04:58:11.191763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.206 [2024-12-10 04:58:11.191772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.206 [2024-12-10 04:58:11.191783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.206 [2024-12-10 04:58:11.191792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.206 [2024-12-10 04:58:11.191803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.206 [2024-12-10 04:58:11.191813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.206 [2024-12-10 04:58:11.191824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.206 [2024-12-10 04:58:11.191833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.206 [2024-12-10 04:58:11.191844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.206 [2024-12-10 04:58:11.191853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.206 [2024-12-10 04:58:11.191866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.206 [2024-12-10 04:58:11.191878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.206 [2024-12-10 04:58:11.191889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.206 [2024-12-10 04:58:11.191898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.206 [2024-12-10 04:58:11.191910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.191918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.191929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.191939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.191951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.191960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.191972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.191981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.191992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.192501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.192511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9c00 is same with the state(6) to be set
00:21:20.207 [2024-12-10 04:58:11.193677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:20.207 [2024-12-10 04:58:11.193711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:21:20.207 [2024-12-10 04:58:11.193720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:21:20.207 [2024-12-10 04:58:11.193730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:21:20.207 [2024-12-10 04:58:11.193738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:21:20.207 [2024-12-10 04:58:11.194746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:21:20.207 [2024-12-10 04:58:11.195015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.207 [2024-12-10 04:58:11.195032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4fa610 with addr=10.0.0.2, port=4420
00:21:20.207 [2024-12-10 04:58:11.195041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4fa610 is same with the state(6) to be set
00:21:20.207 [2024-12-10 04:58:11.195062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa39300 (9): Bad file descriptor
00:21:20.207 [2024-12-10 04:58:11.195551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.207 [2024-12-10 04:58:11.195569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e4f80 with addr=10.0.0.2, port=4420
00:21:20.207 [2024-12-10 04:58:11.195578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e4f80 is same with the state(6) to be set
00:21:20.207 [2024-12-10 04:58:11.195588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4fa610 (9): Bad file descriptor
00:21:20.207 [2024-12-10 04:58:11.195638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.195649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.207 [2024-12-10 04:58:11.195661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.207 [2024-12-10 04:58:11.195668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.195990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.195997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.196007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.196015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.196023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.196030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.196039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.196045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.196056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.196063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.196072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.196079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.196087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.196094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.196102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.196110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.196122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.196129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.196137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.196144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.196153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.196160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.196175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.196183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.196192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.196200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.196209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.196217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.196226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.196233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.196243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.196250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.196259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.196269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.208 [2024-12-10 04:58:11.196278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.208 [2024-12-10 04:58:11.196286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.209 [2024-12-10 04:58:11.196692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.209 [2024-12-10 04:58:11.196699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:21:20.209 [2024-12-10 04:58:11.196707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e9360 is same with the state(6) to be set 00:21:20.209 [2024-12-10 04:58:11.197748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.209 [2024-12-10 04:58:11.197762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.209 [2024-12-10 04:58:11.197774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.209 [2024-12-10 04:58:11.197782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.209 [2024-12-10 04:58:11.197792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.209 [2024-12-10 04:58:11.197800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.209 [2024-12-10 04:58:11.197810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.209 [2024-12-10 04:58:11.197818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.209 [2024-12-10 04:58:11.197827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.209 [2024-12-10 04:58:11.197835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.209 [2024-12-10 04:58:11.197845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.209 [2024-12-10 04:58:11.197854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.209 [2024-12-10 04:58:11.197864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.209 [2024-12-10 04:58:11.197871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.209 [2024-12-10 04:58:11.197880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.209 [2024-12-10 04:58:11.197888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.209 [2024-12-10 04:58:11.197898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.209 [2024-12-10 04:58:11.197905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.209 [2024-12-10 04:58:11.197914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.209 [2024-12-10 04:58:11.197922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.209 [2024-12-10 04:58:11.197931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.209 [2024-12-10 04:58:11.197939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.209 [2024-12-10 04:58:11.197953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.209 [2024-12-10 04:58:11.197961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.197969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.197977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.197986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.197994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198135] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198230] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 
04:58:11.198439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.210 [2024-12-10 04:58:11.198594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.210 [2024-12-10 04:58:11.198603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.198611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.198620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 
nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.198627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.198636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.198643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.198652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.198659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.198667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.198675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.198683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.198690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.198699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.198709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:20.211 [2024-12-10 04:58:11.198718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.198726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.198735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.198742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.198751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.198758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.198767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.198775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.198783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.198791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.198801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.198809] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.198818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.198826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.198834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.198841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.198849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bca60 is same with the state(6) to be set 00:21:20.211 [2024-12-10 04:58:11.200127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.200144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.200163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.200186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.200202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.200220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.200236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.200253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.200271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:20.211 [2024-12-10 04:58:11.200288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.200308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.200324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.200340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.200357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.200374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.200390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.200407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.200424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.211 [2024-12-10 04:58:11.200440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.211 [2024-12-10 04:58:11.200449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200664] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200755] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 
04:58:11.200949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.200990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.200999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.201006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.201015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.201022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.201030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.201038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.212 [2024-12-10 04:58:11.201046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.212 [2024-12-10 04:58:11.201054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.201063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.201070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.201079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.201086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.201095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.201102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.201112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.201119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.201129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.201137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.201146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.201154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.201162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.201178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.201188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.201195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.201204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.201212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.201220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9eaf10 is same with the state(6) to be set 00:21:20.213 [2024-12-10 04:58:11.202263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:20.213 [2024-12-10 04:58:11.202278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202660] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.213 [2024-12-10 04:58:11.202727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.213 [2024-12-10 04:58:11.202735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.202744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.202752] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.202761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.202768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.202778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.202786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.202795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.202803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.202814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.202822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.202832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.202839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.202848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.202857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.202866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.202873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.202882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.202890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.202900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.202908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.202917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.202925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.202934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.202942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 
04:58:11.202950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.202957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.202966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.202974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.202982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.202990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.202999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.203007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.203018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.203028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.203037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.203044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.203053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.203060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.203069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.203077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.203086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.203093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.203102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.203109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.203118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.203126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.203136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.203155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.203164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.203176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.203184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.203191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.203200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.203207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.203215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.203223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.203231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.203239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:20.214 [2024-12-10 04:58:11.203247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.203255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.203264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.203271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.203279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.214 [2024-12-10 04:58:11.203286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.214 [2024-12-10 04:58:11.203295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.203302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.203311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.203318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.203326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.203333] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.203342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.203349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.203356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ec220 is same with the state(6) to be set 00:21:20.215 [2024-12-10 04:58:11.204357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:20.215 [2024-12-10 04:58:11.204524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.215 [2024-12-10 04:58:11.204885] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.215 [2024-12-10 04:58:11.204893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.204902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.204909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.204918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.204925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.204934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.204941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.204949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.204956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.204965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.204972] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.204981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.204988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.204996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 
04:58:11.205158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.216 [2024-12-10 04:58:11.205399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.216 [2024-12-10 04:58:11.205406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82a400 is same with the state(6) to be set 00:21:20.216 [2024-12-10 04:58:11.206359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:20.216 [2024-12-10 04:58:11.206378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:20.216 [2024-12-10 04:58:11.206391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:20.216 [2024-12-10 04:58:11.206402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:20.216 [2024-12-10 04:58:11.206436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e4f80 (9): Bad file descriptor 00:21:20.216 [2024-12-10 04:58:11.206446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:20.216 [2024-12-10 04:58:11.206454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:20.216 [2024-12-10 04:58:11.206462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:20.216 [2024-12-10 04:58:11.206474] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:20.216 [2024-12-10 04:58:11.206515] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:21:20.216 [2024-12-10 04:58:11.206537] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:21:20.216 [2024-12-10 04:58:11.206610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:20.216 [2024-12-10 04:58:11.206876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.217 [2024-12-10 04:58:11.206893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e5410 with addr=10.0.0.2, port=4420 00:21:20.217 [2024-12-10 04:58:11.206902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e5410 is same with the state(6) to be set 00:21:20.217 [2024-12-10 04:58:11.207049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.217 [2024-12-10 04:58:11.207063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e4570 with addr=10.0.0.2, port=4420 00:21:20.217 [2024-12-10 04:58:11.207071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e4570 is same with the state(6) to be set 00:21:20.217 [2024-12-10 04:58:11.207287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.217 [2024-12-10 04:58:11.207300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d9250 with addr=10.0.0.2, port=4420 00:21:20.217 [2024-12-10 04:58:11.207308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9250 is same with the state(6) to be set 00:21:20.217 [2024-12-10 04:58:11.207436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.217 [2024-12-10 04:58:11.207448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa113d0 with addr=10.0.0.2, port=4420 00:21:20.217 [2024-12-10 04:58:11.207457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa113d0 is same with the state(6) to be set 00:21:20.217 [2024-12-10 04:58:11.207465] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:20.217 [2024-12-10 04:58:11.207472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:20.217 [2024-12-10 04:58:11.207480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:20.217 [2024-12-10 04:58:11.207487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:20.217 [2024-12-10 04:58:11.208412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 
[2024-12-10 04:58:11.208495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:20.217 [2024-12-10 04:58:11.208763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208844] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.217 [2024-12-10 04:58:11.208892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.217 [2024-12-10 04:58:11.208900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.208907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.208915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.208922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.208931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.208937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.208945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.208951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.208960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.208967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.208975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.208982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.208990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.208997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 
04:58:11.209102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209195] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 [2024-12-10 04:58:11.209359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.218 [2024-12-10 04:58:11.209368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.218 
[2024-12-10 04:58:11.209375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.218 [2024-12-10 04:58:11.209384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.218 [2024-12-10 04:58:11.209391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.218 [2024-12-10 04:58:11.209400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.218 [2024-12-10 04:58:11.209408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.218 [2024-12-10 04:58:11.209417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.218 [2024-12-10 04:58:11.209424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.218 [2024-12-10 04:58:11.209432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x829170 is same with the state(6) to be set
00:21:20.218 [2024-12-10 04:58:11.210618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:21:20.218 [2024-12-10 04:58:11.210636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:20.218 [2024-12-10 04:58:11.210644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:20.218 task offset: 24576 on job bdev=Nvme2n1 fails
00:21:20.218
00:21:20.218 Latency(us)
00:21:20.218 [2024-12-10T03:58:11.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:20.218 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:20.218 Job: Nvme1n1 ended in about 0.85 seconds with error
00:21:20.218 Verification LBA range: start 0x0 length 0x400
00:21:20.219 Nvme1n1 : 0.85 227.12 14.19 75.71 0.00 209109.58 16477.62 212711.13
00:21:20.219 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:20.219 Job: Nvme2n1 ended in about 0.83 seconds with error
00:21:20.219 Verification LBA range: start 0x0 length 0x400
00:21:20.219 Nvme2n1 : 0.83 230.47 14.40 76.82 0.00 202235.61 17101.78 211712.49
00:21:20.219 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:20.219 Job: Nvme3n1 ended in about 0.85 seconds with error
00:21:20.219 Verification LBA range: start 0x0 length 0x400
00:21:20.219 Nvme3n1 : 0.85 232.45 14.53 75.52 0.00 198113.14 14417.92 211712.49
00:21:20.219 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:20.219 Job: Nvme4n1 ended in about 0.84 seconds with error
00:21:20.219 Verification LBA range: start 0x0 length 0x400
00:21:20.219 Nvme4n1 : 0.84 227.91 14.24 75.97 0.00 196843.28 13544.11 216705.71
00:21:20.219 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:20.219 Job: Nvme5n1 ended in about 0.85 seconds with error
00:21:20.219 Verification LBA range: start 0x0 length 0x400
00:21:20.219 Nvme5n1 : 0.85 150.61 9.41 75.31 0.00 259866.17 20597.03 224694.86
00:21:20.219 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:20.219 Job: Nvme6n1 ended in about 0.85 seconds with error
00:21:20.219 Verification LBA range: start 0x0 length 0x400
00:21:20.219 Nvme6n1 : 0.85 225.36 14.08 75.12 0.00 191531.64 15978.30 212711.13
00:21:20.219 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:20.219 Job: Nvme7n1 ended in about 0.84 seconds with error
00:21:20.219 Verification LBA range: start 0x0 length 0x400
00:21:20.219 Nvme7n1 : 0.84 232.97 14.56 71.32 0.00 184900.14 4181.82 211712.49
00:21:20.219 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:20.219 Job: Nvme8n1 ended in about 0.83 seconds with error
00:21:20.219 Verification LBA range: start 0x0 length 0x400
00:21:20.219 Nvme8n1 : 0.83 230.07 14.38 76.69 0.00 179427.72 17351.44 219701.64
00:21:20.219 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:20.219 Job: Nvme9n1 ended in about 0.86 seconds with error
00:21:20.219 Verification LBA range: start 0x0 length 0x400
00:21:20.219 Nvme9n1 : 0.86 149.18 9.32 74.59 0.00 242015.74 17725.93 239674.51
00:21:20.219 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:20.219 Job: Nvme10n1 ended in about 0.85 seconds with error
00:21:20.219 Verification LBA range: start 0x0 length 0x400
00:21:20.219 Nvme10n1 : 0.85 149.88 9.37 74.94 0.00 235540.81 17601.10 229688.08
00:21:20.219 [2024-12-10T03:58:11.356Z] ===================================================================================================================
00:21:20.219 [2024-12-10T03:58:11.356Z] Total : 2056.01 128.50 751.98 0.00 207032.86 4181.82 239674.51
00:21:20.219 [2024-12-10 04:58:11.242125] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:20.219 [2024-12-10 04:58:11.242182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:20.219 [2024-12-10 04:58:11.242512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.219 [2024-12-10 04:58:11.242531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa41170 with addr=10.0.0.2, port=4420 [2024-12-10 04:58:11.242541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa41170 is same with the state(6) to be set
00:21:20.219 [2024-12-10 04:58:11.242556]
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e5410 (9): Bad file descriptor
00:21:20.219 [2024-12-10 04:58:11.242566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e4570 (9): Bad file descriptor
00:21:20.219 [2024-12-10 04:58:11.242575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d9250 (9): Bad file descriptor
00:21:20.219 [2024-12-10 04:58:11.242590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa113d0 (9): Bad file descriptor
00:21:20.219 [2024-12-10 04:58:11.242952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.219 [2024-12-10 04:58:11.242969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d9450 with addr=10.0.0.2, port=4420 [2024-12-10 04:58:11.242977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9450 is same with the state(6) to be set
00:21:20.219 [2024-12-10 04:58:11.243195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.219 [2024-12-10 04:58:11.243208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa52b60 with addr=10.0.0.2, port=4420 [2024-12-10 04:58:11.243216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa52b60 is same with the state(6) to be set
00:21:20.219 [2024-12-10 04:58:11.243309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.219 [2024-12-10 04:58:11.243320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4fa610 with addr=10.0.0.2, port=4420 [2024-12-10 04:58:11.243328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4fa610 is same with the state(6) to be set
00:21:20.219 [2024-12-10 04:58:11.243496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.219 [2024-12-10 04:58:11.243506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa39300 with addr=10.0.0.2, port=4420 [2024-12-10 04:58:11.243514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa39300 is same with the state(6) to be set
00:21:20.219 [2024-12-10 04:58:11.243523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa41170 (9): Bad file descriptor
00:21:20.219 [2024-12-10 04:58:11.243532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:21:20.219 [2024-12-10 04:58:11.243539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:21:20.219 [2024-12-10 04:58:11.243547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:21:20.219 [2024-12-10 04:58:11.243555] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:21:20.219 [2024-12-10 04:58:11.243563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:21:20.219 [2024-12-10 04:58:11.243569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:21:20.219 [2024-12-10 04:58:11.243577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:21:20.219 [2024-12-10 04:58:11.243584] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:21:20.219 [2024-12-10 04:58:11.243591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:21:20.219 [2024-12-10 04:58:11.243597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:21:20.219 [2024-12-10 04:58:11.243603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:21:20.219 [2024-12-10 04:58:11.243609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:21:20.219 [2024-12-10 04:58:11.243616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:21:20.219 [2024-12-10 04:58:11.243622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:21:20.219 [2024-12-10 04:58:11.243629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:21:20.219 [2024-12-10 04:58:11.243638] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:21:20.219 [2024-12-10 04:58:11.243688] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:21:20.219 [2024-12-10 04:58:11.244013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d9450 (9): Bad file descriptor
00:21:20.219 [2024-12-10 04:58:11.244027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa52b60 (9): Bad file descriptor
00:21:20.219 [2024-12-10 04:58:11.244036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4fa610 (9): Bad file descriptor
00:21:20.219 [2024-12-10 04:58:11.244044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa39300 (9): Bad file descriptor
00:21:20.219 [2024-12-10 04:58:11.244052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:21:20.219 [2024-12-10 04:58:11.244057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:21:20.219 [2024-12-10 04:58:11.244064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:21:20.219 [2024-12-10 04:58:11.244071] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:21:20.219 [2024-12-10 04:58:11.244105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:21:20.220 [2024-12-10 04:58:11.244115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:21:20.220 [2024-12-10 04:58:11.244124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:21:20.220 [2024-12-10 04:58:11.244132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:21:20.220 [2024-12-10 04:58:11.244141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:20.220 [2024-12-10 04:58:11.244176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:21:20.220 [2024-12-10 04:58:11.244183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:21:20.220 [2024-12-10 04:58:11.244190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:21:20.220 [2024-12-10 04:58:11.244197] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:21:20.220 [2024-12-10 04:58:11.244204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:21:20.220 [2024-12-10 04:58:11.244211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:21:20.220 [2024-12-10 04:58:11.244217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:21:20.220 [2024-12-10 04:58:11.244223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:21:20.220 [2024-12-10 04:58:11.244229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:21:20.220 [2024-12-10 04:58:11.244236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:21:20.220 [2024-12-10 04:58:11.244242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:21:20.220 [2024-12-10 04:58:11.244251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:21:20.220 [2024-12-10 04:58:11.244258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:21:20.220 [2024-12-10 04:58:11.244264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:21:20.220 [2024-12-10 04:58:11.244273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:21:20.220 [2024-12-10 04:58:11.244279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:21:20.220 [2024-12-10 04:58:11.244531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.220 [2024-12-10 04:58:11.244544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e4f80 with addr=10.0.0.2, port=4420 [2024-12-10 04:58:11.244551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e4f80 is same with the state(6) to be set
00:21:20.220 [2024-12-10 04:58:11.244689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.220 [2024-12-10 04:58:11.244699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa113d0 with addr=10.0.0.2, port=4420 [2024-12-10 04:58:11.244706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa113d0 is same with the state(6) to be set
00:21:20.220 [2024-12-10 04:58:11.244936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.220 [2024-12-10 04:58:11.244946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d9250 with addr=10.0.0.2, port=4420 [2024-12-10 04:58:11.244953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9250 is same with the state(6) to be set
00:21:20.220 [2024-12-10 04:58:11.245163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.220 [2024-12-10 04:58:11.245177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e4570 with addr=10.0.0.2, port=4420 [2024-12-10 04:58:11.245184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e4570 is same with the state(6) to be set
00:21:20.220 [2024-12-10 04:58:11.245443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.220 [2024-12-10 04:58:11.245454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x5e5410 with addr=10.0.0.2, port=4420 00:21:20.220 [2024-12-10 04:58:11.245461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e5410 is same with the state(6) to be set 00:21:20.220 [2024-12-10 04:58:11.245487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e4f80 (9): Bad file descriptor 00:21:20.220 [2024-12-10 04:58:11.245497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa113d0 (9): Bad file descriptor 00:21:20.220 [2024-12-10 04:58:11.245506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d9250 (9): Bad file descriptor 00:21:20.220 [2024-12-10 04:58:11.245514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e4570 (9): Bad file descriptor 00:21:20.220 [2024-12-10 04:58:11.245523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e5410 (9): Bad file descriptor 00:21:20.220 [2024-12-10 04:58:11.245548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:20.220 [2024-12-10 04:58:11.245556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:20.220 [2024-12-10 04:58:11.245563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:20.220 [2024-12-10 04:58:11.245569] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:20.220 [2024-12-10 04:58:11.245576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:20.220 [2024-12-10 04:58:11.245582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:20.220 [2024-12-10 04:58:11.245588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:20.220 [2024-12-10 04:58:11.245597] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:20.220 [2024-12-10 04:58:11.245604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:20.220 [2024-12-10 04:58:11.245611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:20.220 [2024-12-10 04:58:11.245618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:20.220 [2024-12-10 04:58:11.245624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:20.220 [2024-12-10 04:58:11.245630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:20.220 [2024-12-10 04:58:11.245636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:20.220 [2024-12-10 04:58:11.245642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:20.220 [2024-12-10 04:58:11.245648] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:21:20.220 [2024-12-10 04:58:11.245654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:20.220 [2024-12-10 04:58:11.245660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:20.220 [2024-12-10 04:58:11.245667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:20.220 [2024-12-10 04:58:11.245673] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:20.480 04:58:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:21.860 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 684829 00:21:21.860 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:21.860 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 684829 00:21:21.860 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 684829 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:21.861 04:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:21.861 rmmod nvme_tcp 00:21:21.861 rmmod nvme_fabrics 00:21:21.861 rmmod nvme_keyring 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 684597 ']' 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 684597 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 684597 ']' 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 684597 00:21:21.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (684597) - No such process 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 684597 is not found' 00:21:21.861 Process with pid 684597 is not found 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:21.861 04:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.861 04:58:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:23.772 00:21:23.772 real 0m7.224s 00:21:23.772 user 0m16.246s 00:21:23.772 sys 0m1.337s 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:23.772 ************************************ 00:21:23.772 END TEST nvmf_shutdown_tc3 00:21:23.772 ************************************ 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown 
-- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:23.772 ************************************ 00:21:23.772 START TEST nvmf_shutdown_tc4 00:21:23.772 ************************************ 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:23.772 04:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.772 04:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.772 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:23.772 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:23.773 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.773 04:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:23.773 Found net devices under 0000:af:00.0: cvl_0_0 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:23.773 Found net devices under 0000:af:00.1: cvl_0_1 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.773 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.033 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.033 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.033 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:24.033 04:58:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:24.033 04:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:24.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:21:24.033 00:21:24.033 --- 10.0.0.2 ping statistics --- 00:21:24.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.033 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:24.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:21:24.033 00:21:24.033 --- 10.0.0.1 ping statistics --- 00:21:24.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.033 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:24.033 04:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=685961 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 685961 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 685961 ']' 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.033 04:58:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:24.292 [2024-12-10 04:58:15.175812] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:21:24.292 [2024-12-10 04:58:15.175859] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.292 [2024-12-10 04:58:15.256828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:24.292 [2024-12-10 04:58:15.298265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.292 [2024-12-10 04:58:15.298299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.292 [2024-12-10 04:58:15.298308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.292 [2024-12-10 04:58:15.298314] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.292 [2024-12-10 04:58:15.298319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:24.292 [2024-12-10 04:58:15.299662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.292 [2024-12-10 04:58:15.299766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:24.292 [2024-12-10 04:58:15.299791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.292 [2024-12-10 04:58:15.299792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:25.231 [2024-12-10 04:58:16.054771] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.231 04:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.231 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:25.231 Malloc1 00:21:25.231 [2024-12-10 04:58:16.163341] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.231 Malloc2 00:21:25.231 Malloc3 00:21:25.231 Malloc4 00:21:25.231 Malloc5 00:21:25.231 Malloc6 00:21:25.491 Malloc7 00:21:25.491 Malloc8 00:21:25.491 Malloc9 
00:21:25.491 Malloc10 00:21:25.491 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.491 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:25.491 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:25.491 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:25.491 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=686283 00:21:25.491 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:25.491 04:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:25.751 [2024-12-10 04:58:16.673298] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:31.150 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:31.150 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 685961 00:21:31.150 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 685961 ']' 00:21:31.150 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 685961 00:21:31.150 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:31.150 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.150 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 685961 00:21:31.150 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:31.150 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:31.150 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 685961' 00:21:31.150 killing process with pid 685961 00:21:31.150 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 685961 00:21:31.150 04:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 685961 00:21:31.150 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 
00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 [2024-12-10 04:58:21.670516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136bc60 is same with the state(6) to be set 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 [2024-12-10 04:58:21.670564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136bc60 is same with the state(6) to be set 00:21:31.151 starting I/O failed: -6 00:21:31.151 [2024-12-10 04:58:21.670573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136bc60 is same with the state(6) to be set 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 [2024-12-10 04:58:21.670587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136bc60 is same with the state(6) to be set 00:21:31.151 [2024-12-10 04:58:21.670594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136bc60 is same with the state(6) to be set 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 [2024-12-10 04:58:21.670601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136bc60 is same with the state(6) to be set 00:21:31.151 [2024-12-10 04:58:21.670607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136bc60 is same with the state(6) to be set 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 [2024-12-10 04:58:21.670614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136bc60 is same with the state(6) to be set 00:21:31.151 Write 
completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 [2024-12-10 04:58:21.671012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 [2024-12-10 04:58:21.671207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x136c130 is same with the state(6) to be set 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 [2024-12-10 04:58:21.671232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c130 is same with the state(6) to be set 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 [2024-12-10 04:58:21.671239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c130 is same with the state(6) to be set 00:21:31.151 starting I/O failed: -6 00:21:31.151 [2024-12-10 04:58:21.671248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c130 is same with the state(6) to be set 00:21:31.151 [2024-12-10 04:58:21.671256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c130 is same with the state(6) to be set 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 [2024-12-10 04:58:21.671263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c130 is same with the state(6) to be set 00:21:31.151 [2024-12-10 04:58:21.671270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c130 is same with the state(6) to be set 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 [2024-12-10 04:58:21.671277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c130 is same with the state(6) to be set 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write 
completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 Write completed with error 
(sct=0, sc=8) 00:21:31.151 [2024-12-10 04:58:21.671863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c600 is same with the state(6) to be set 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 starting I/O failed: -6 00:21:31.151 [2024-12-10 04:58:21.671887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c600 is same with the state(6) to be set 00:21:31.151 [2024-12-10 04:58:21.671896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c600 is same with the state(6) to be set 00:21:31.151 Write completed with error (sct=0, sc=8) 00:21:31.151 [2024-12-10 04:58:21.671903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c600 is same with the state(6) to be set 00:21:31.151 [2024-12-10 04:58:21.671910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c600 is same with the state(6) to be set 00:21:31.151 [2024-12-10 04:58:21.671916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c600 is same with the state(6) to be set 00:21:31.151 [2024-12-10 04:58:21.671922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c600 is same with the state(6) to be set 00:21:31.151 [2024-12-10 04:58:21.671929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c600 is same with the state(6) to be set 00:21:31.151 [2024-12-10 04:58:21.671927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:31.151 [2024-12-10 04:58:21.671935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c600 is same with the state(6) to be set 00:21:31.151 [2024-12-10 04:58:21.671943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c600 is same with the state(6) to be 
set 00:21:31.152 [2024-12-10 04:58:21.671950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c600 is same with the state(6) to be set 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 Write completed with error (sct=0, 
sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O 
failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 [2024-12-10 04:58:21.672982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:31.152 [2024-12-10 04:58:21.672987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bfe0 is same with the state(6) to be set 00:21:31.152 [2024-12-10 04:58:21.673005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bfe0 is same with the state(6) to be set 00:21:31.152 [2024-12-10 04:58:21.673012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bfe0 is same with the state(6) to be set 00:21:31.152 [2024-12-10 04:58:21.673019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bfe0 is same with the state(6) to be set 00:21:31.152 [2024-12-10 04:58:21.673025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bfe0 is same with the state(6) to be set 00:21:31.152 [2024-12-10 04:58:21.673032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bfe0 is same with the state(6) to be set 00:21:31.152 [2024-12-10 04:58:21.673038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bfe0 is same with the state(6) to be set 00:21:31.152 [2024-12-10 04:58:21.673044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bfe0 is same with the state(6) to be set 00:21:31.152 [2024-12-10 04:58:21.673050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110bfe0 is same with the 
state(6) to be set 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 [2024-12-10 04:58:21.673447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110c4d0 is same with the state(6) to be set 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 [2024-12-10 
04:58:21.673467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110c4d0 is same with the state(6) to be set 00:21:31.152 starting I/O failed: -6 00:21:31.152 [2024-12-10 04:58:21.673474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110c4d0 is same with the state(6) to be set 00:21:31.152 [2024-12-10 04:58:21.673481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110c4d0 is same with the state(6) to be set 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 [2024-12-10 04:58:21.673488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110c4d0 is same with the state(6) to be set 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.152 starting I/O failed: -6 00:21:31.152 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 
00:21:31.153 [2024-12-10 04:58:21.673761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110c9c0 is same with the state(6) to be set 00:21:31.153 starting I/O failed: -6 00:21:31.153 [2024-12-10 04:58:21.673774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110c9c0 is same with the state(6) to be set 00:21:31.153 [2024-12-10 04:58:21.673781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110c9c0 is same with the state(6) to be set 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 [2024-12-10 04:58:21.673788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110c9c0 is same with the state(6) to be set 00:21:31.153 starting I/O failed: -6 00:21:31.153 [2024-12-10 04:58:21.673796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110c9c0 is same with the state(6) to be set 00:21:31.153 [2024-12-10 04:58:21.673803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110c9c0 is same with the state(6) to be set 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 
starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 [2024-12-10 04:58:21.674066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ecc70 is same with the state(6) to be set 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 [2024-12-10 04:58:21.674088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ecc70 is same with the state(6) to be set 00:21:31.153 [2024-12-10 04:58:21.674096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ecc70 is same with the state(6) to be set 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 [2024-12-10 04:58:21.674103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ecc70 is same with the state(6) to be set 00:21:31.153 starting I/O failed: -6 00:21:31.153 [2024-12-10 04:58:21.674110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ecc70 is same with the state(6) to be set 00:21:31.153 [2024-12-10 04:58:21.674117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ecc70 is same with the state(6) to be set 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 [2024-12-10 04:58:21.674123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ecc70 is same with the state(6) to be set 00:21:31.153 starting I/O failed: -6 00:21:31.153 [2024-12-10 04:58:21.674130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ecc70 is same with the state(6) to be set 00:21:31.153 [2024-12-10 
04:58:21.674137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ecc70 is same with the state(6) to be set 00:21:31.153 [2024-12-10 04:58:21.674143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ecc70 is same with the state(6) to be set 00:21:31.153 [2024-12-10 04:58:21.674150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ecc70 is same with the state(6) to be set 00:21:31.153 [2024-12-10 04:58:21.674157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ecc70 is same with the state(6) to be set 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 [2024-12-10 04:58:21.674439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136cfa0 is same with the state(6) to be set 00:21:31.153 [2024-12-10 04:58:21.674452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136cfa0 is same with the state(6) to be set 00:21:31.153 [2024-12-10 04:58:21.674458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136cfa0 is same with the state(6) to be set 00:21:31.153 [2024-12-10 04:58:21.674466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136cfa0 is same with the state(6) to be set 00:21:31.153 [2024-12-10 04:58:21.674472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136cfa0 is same with the state(6) to be set 00:21:31.153 [2024-12-10 04:58:21.674479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136cfa0 is same with the state(6) to be set 00:21:31.153 Write 
completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 [2024-12-10 04:58:21.674537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.153 NVMe io qpair process completion error 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 [2024-12-10 04:58:21.674982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136d470 is same with the state(6) to be set 00:21:31.153 [2024-12-10 04:58:21.674994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136d470 is same with the state(6) to be set 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 [2024-12-10 04:58:21.675001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136d470 is same with the state(6) to be set 00:21:31.153 [2024-12-10 04:58:21.675008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136d470 is same with the state(6) to be set 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 [2024-12-10 04:58:21.675014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136d470 is same with the state(6) to be set 00:21:31.153 [2024-12-10 04:58:21.675021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136d470 is same with the state(6) to be set 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 [2024-12-10 04:58:21.675027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x136d470 is same with the state(6) to be set 00:21:31.153 [2024-12-10 04:58:21.675034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136d470 is same with the state(6) to be set 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 Write completed with error (sct=0, sc=8) 00:21:31.153 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 [2024-12-10 04:58:21.675416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x136d940 is same with the state(6) to be set 00:21:31.154 [2024-12-10 04:58:21.675428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136d940 is same with the state(6) to be set 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 [2024-12-10 04:58:21.675437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136d940 is same with the state(6) to be set 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 [2024-12-10 04:58:21.675443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136d940 is same with the state(6) to be set 00:21:31.154 [2024-12-10 04:58:21.675450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136d940 is same with the state(6) to be set 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 [2024-12-10 04:58:21.675457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136d940 is same with the state(6) to be set 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 [2024-12-10 04:58:21.675503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 
starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with 
error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 [2024-12-10 04:58:21.676420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with 
error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 
starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 [2024-12-10 04:58:21.677393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.154 Write completed with error (sct=0, sc=8) 00:21:31.154 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write 
completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 
Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 
00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 [2024-12-10 04:58:21.679192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.155 NVMe io qpair process completion error 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 
00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 [2024-12-10 04:58:21.682393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110faa0 is same with the state(6) to be set 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 [2024-12-10 04:58:21.682417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110faa0 is same with the state(6) to be set 00:21:31.155 [2024-12-10 04:58:21.682425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110faa0 is same with the state(6) to be set 00:21:31.155 [2024-12-10 04:58:21.682433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110faa0 is same with the state(6) to be set 00:21:31.155 [2024-12-10 04:58:21.682440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110faa0 is same with the state(6) to be set 00:21:31.155 [2024-12-10 04:58:21.682438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:31.155 [2024-12-10 04:58:21.682450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110faa0 is same with the state(6) to be set 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.155 starting I/O failed: -6 00:21:31.155 Write completed with error (sct=0, sc=8) 00:21:31.156 starting I/O failed: -6 00:21:31.156 Write completed with error (sct=0, sc=8) 00:21:31.156 Write completed with error (sct=0, sc=8) 00:21:31.156 Write completed with error (sct=0, sc=8) 00:21:31.156 starting I/O failed: -6 00:21:31.156 Write completed with error (sct=0, sc=8) 00:21:31.156 starting I/O failed: -6 00:21:31.156 Write completed with error (sct=0, sc=8) 00:21:31.156 Write completed with error (sct=0, sc=8) 00:21:31.156 Write completed with error (sct=0, sc=8) 00:21:31.156 starting I/O failed: -6 00:21:31.156 [2024-12-10 04:58:21.682736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110ff90 is same with the state(6) to be set 00:21:31.156 [2024-12-10 04:58:21.682756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110ff90 is same with the state(6) to be set 00:21:31.156 Write completed with error (sct=0, sc=8) 00:21:31.156 [2024-12-10 04:58:21.682763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110ff90 is same with the state(6) to be set 00:21:31.156 starting I/O failed: -6 00:21:31.156 [2024-12-10 04:58:21.682770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110ff90 is same with the state(6) to be set 00:21:31.156 [2024-12-10 04:58:21.682777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110ff90 is same with the state(6) to be set 00:21:31.156 
Write completed with error (sct=0, sc=8)
00:21:31.156 starting I/O failed: -6
00:21:31.156 Write completed with error (sct=0, sc=8)
...
00:21:31.156 [2024-12-10 04:58:21.683058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1110460 is same with the state(6) to be set
00:21:31.156 [2024-12-10 04:58:21.683080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1110460 is same with the state(6) to be set
...
00:21:31.156 [2024-12-10 04:58:21.683114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1110460 is same with the state(6) to be set
00:21:31.156 Write completed with error (sct=0, sc=8)
00:21:31.156 starting I/O failed: -6
...
00:21:31.156 [2024-12-10 04:58:21.683331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:31.156 [2024-12-10 04:58:21.683491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f5d0 is same with the state(6) to be set
...
00:21:31.156 [2024-12-10 04:58:21.683539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f5d0 is same with the state(6) to be set
00:21:31.156 Write completed with error (sct=0, sc=8)
00:21:31.156 starting I/O failed: -6
...
00:21:31.157 [2024-12-10 04:58:21.684341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:31.157 Write completed with error (sct=0, sc=8)
00:21:31.157 starting I/O failed: -6
...
00:21:31.157 [2024-12-10 04:58:21.685774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:31.157 NVMe io qpair process completion error
00:21:31.157 Write completed with error (sct=0, sc=8)
00:21:31.157 starting I/O failed: -6
...
00:21:31.158 [2024-12-10 04:58:21.686749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:31.158 Write completed with error (sct=0, sc=8)
00:21:31.158 starting I/O failed: -6
...
00:21:31.158 [2024-12-10 04:58:21.687674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:31.158 Write completed with error (sct=0, sc=8)
00:21:31.158 starting I/O failed: -6
...
00:21:31.158 [2024-12-10 04:58:21.688662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:31.158 Write completed with error (sct=0, sc=8)
00:21:31.158 starting I/O failed: -6
...
00:21:31.159 [2024-12-10 04:58:21.690757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:31.159 NVMe io qpair process completion error
00:21:31.159 Write completed with error (sct=0, sc=8)
00:21:31.159 starting I/O failed: -6
...
00:21:31.159 [2024-12-10 04:58:21.691753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:31.159 Write completed with error (sct=0, sc=8)
00:21:31.159 starting I/O failed: -6
...
00:21:31.160 [2024-12-10 04:58:21.692642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:31.160 Write completed with error (sct=0, sc=8)
00:21:31.160 starting I/O failed: -6
...
00:21:31.160 [2024-12-10 04:58:21.693642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:31.160 Write completed with error (sct=0, sc=8)
00:21:31.160 starting I/O failed: -6
...
00:21:31.161 Write completed with error (sct=0,
sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error 
(sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 [2024-12-10 
04:58:21.697327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.161 NVMe io qpair process completion error 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 
00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 [2024-12-10 04:58:21.698360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write 
completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.161 starting I/O failed: -6 00:21:31.161 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 [2024-12-10 04:58:21.699184] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write 
completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 
00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 [2024-12-10 04:58:21.700200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 
starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.162 starting I/O failed: -6 00:21:31.162 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 
00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 [2024-12-10 04:58:21.703163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ 
transport error -6 (No such device or address) on qpair id 3 00:21:31.163 NVMe io qpair process completion error 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write 
completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 [2024-12-10 04:58:21.704080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:31.163 starting I/O failed: -6 00:21:31.163 starting I/O failed: -6 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting 
I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 starting I/O failed: -6 00:21:31.163 Write completed with error (sct=0, sc=8) 00:21:31.163 [2024-12-10 04:58:21.704986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:31.164 Write completed with error (sct=0, sc=8) 00:21:31.164 starting I/O failed: -6 00:21:31.164 Write completed with error (sct=0, sc=8) 00:21:31.164 starting I/O failed: -6 00:21:31.164 Write completed with error (sct=0, sc=8) 00:21:31.164 starting I/O failed: -6 
00:21:31.164 Write completed with error (sct=0, sc=8)
00:21:31.164 starting I/O failed: -6
00:21:31.164 [... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each in-flight write; duplicates elided ...]
00:21:31.164 [2024-12-10 04:58:21.706031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:31.165 [... duplicates elided ...]
00:21:31.165 [2024-12-10 04:58:21.707815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:31.165 NVMe io qpair process completion error
00:21:31.165 [... duplicates elided ...]
00:21:31.165 [2024-12-10 04:58:21.708828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:31.165 [... duplicates elided ...]
00:21:31.165 [2024-12-10 04:58:21.709752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:31.166 [... duplicates elided ...]
00:21:31.166 [2024-12-10 04:58:21.710758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:31.166 [... duplicates elided ...]
00:21:31.166 [2024-12-10 04:58:21.712528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:31.166 NVMe io qpair process completion error
00:21:31.167 [... duplicates elided ...]
00:21:31.167 [2024-12-10 04:58:21.713441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:31.167 [... duplicates elided ...]
00:21:31.167 [2024-12-10 04:58:21.714330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:31.168 [... duplicates elided ...]
00:21:31.168 [2024-12-10 04:58:21.715331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:31.168 [... duplicates elided ...]
00:21:31.168 [2024-12-10 04:58:21.720840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:31.168 NVMe io qpair process completion error
00:21:31.168 [... duplicates elided ...]
00:21:31.168 Write completed
with error (sct=0, sc=8) 00:21:31.168 starting I/O failed: -6 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.168 starting I/O failed: -6 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.168 starting I/O failed: -6 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.168 starting I/O failed: -6 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.168 starting I/O failed: -6 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.168 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 [2024-12-10 04:58:21.721875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No 
such device or address) on qpair id 2 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write 
completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 [2024-12-10 04:58:21.722771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error 
(sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting 
I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 [2024-12-10 04:58:21.723762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 
00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.169 starting I/O failed: -6 00:21:31.169 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, 
sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error (sct=0, sc=8) 00:21:31.170 starting I/O failed: -6 00:21:31.170 Write completed with error 
(sct=0, sc=8)
00:21:31.170 starting I/O failed: -6
00:21:31.170 Write completed with error (sct=0, sc=8)
00:21:31.170 starting I/O failed: -6
00:21:31.170 Write completed with error (sct=0, sc=8)
00:21:31.170 starting I/O failed: -6
00:21:31.170 Write completed with error (sct=0, sc=8)
00:21:31.170 starting I/O failed: -6
00:21:31.170 Write completed with error (sct=0, sc=8)
00:21:31.170 starting I/O failed: -6
00:21:31.170 Write completed with error (sct=0, sc=8)
00:21:31.170 starting I/O failed: -6
00:21:31.170 Write completed with error (sct=0, sc=8)
00:21:31.170 starting I/O failed: -6
00:21:31.170 Write completed with error (sct=0, sc=8)
00:21:31.170 starting I/O failed: -6
00:21:31.170 Write completed with error (sct=0, sc=8)
00:21:31.170 starting I/O failed: -6
00:21:31.170 Write completed with error (sct=0, sc=8)
00:21:31.170 starting I/O failed: -6
00:21:31.170 Write completed with error (sct=0, sc=8)
00:21:31.170 starting I/O failed: -6
00:21:31.170 Write completed with error (sct=0, sc=8)
00:21:31.170 starting I/O failed: -6
00:21:31.170 Write completed with error (sct=0, sc=8)
00:21:31.170 starting I/O failed: -6
00:21:31.170 [2024-12-10 04:58:21.727986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:31.170 NVMe io qpair process completion error
00:21:31.170 Initializing NVMe Controllers
00:21:31.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:31.170 Controller IO queue size 128, less than required.
00:21:31.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:31.170 Controller IO queue size 128, less than required.
00:21:31.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:31.170 Controller IO queue size 128, less than required.
00:21:31.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:31.170 Controller IO queue size 128, less than required.
00:21:31.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:31.170 Controller IO queue size 128, less than required.
00:21:31.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:31.170 Controller IO queue size 128, less than required.
00:21:31.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:31.170 Controller IO queue size 128, less than required.
00:21:31.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:31.170 Controller IO queue size 128, less than required.
00:21:31.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:31.170 Controller IO queue size 128, less than required.
00:21:31.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:31.170 Controller IO queue size 128, less than required.
00:21:31.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:31.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:31.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:31.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:31.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:31.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:31.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:31.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:31.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:31.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:31.170 Initialization complete. Launching workers.
00:21:31.170 ========================================================
00:21:31.170 Latency(us)
00:21:31.170 Device Information : IOPS MiB/s Average min max
00:21:31.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2199.09 94.49 58210.46 895.24 110249.00
00:21:31.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2197.36 94.42 57677.04 717.62 109392.44
00:21:31.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2193.46 94.25 57789.89 851.15 107956.52
00:21:31.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2187.17 93.98 58499.22 917.62 114299.98
00:21:31.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2204.29 94.72 57502.04 887.17 104862.25
00:21:31.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2189.77 94.09 57892.07 689.27 103425.92
00:21:31.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2232.23 95.92 56810.56 928.00 102571.20
00:21:31.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2210.57 94.99 57401.64 812.13 104583.40
00:21:31.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2167.03 93.11 58581.73 682.46 108008.09
00:21:31.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2177.86 93.58 58303.57 876.35 110536.29
00:21:31.171 ========================================================
00:21:31.171 Total : 21958.82 943.54 57863.11 682.46 114299.98
00:21:31.171
00:21:31.171 [2024-12-10 04:58:21.731003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82e560 is same with the state(6) to be set
00:21:31.171 [2024-12-10 04:58:21.731049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x830900 is same with the state(6) to be set
00:21:31.171 [2024-12-10 04:58:21.731078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x830720 is same with the state(6) to be set
00:21:31.171 [2024-12-10 04:58:21.731106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82fa70 is same with the state(6) to be set
00:21:31.171 [2024-12-10 04:58:21.731134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82ebc0 is same with the state(6) to be set
00:21:31.171 [2024-12-10 04:58:21.731161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82eef0 is same with the state(6) to be set
00:21:31.171 [2024-12-10 04:58:21.731211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82f740 is same with the state(6) to be set
00:21:31.171 [2024-12-10 04:58:21.731237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82f410 is same with the state(6) to be set
00:21:31.171 [2024-12-10 04:58:21.731265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x830ae0 is same with the state(6) to be set
00:21:31.171 [2024-12-10 04:58:21.731292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82e890 is same with the state(6) to be set
00:21:31.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:31.171 04:58:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 686283
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 686283
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 686283
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:32.110 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 685961 ']'
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 685961
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 685961 ']'
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 685961
00:21:32.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (685961) - No such process
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 685961 is not found'
00:21:32.110 Process with pid 685961 is not found
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:32.110 04:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:34.648
00:21:34.648 real 0m10.420s
00:21:34.648 user 0m27.653s
00:21:34.648 sys 0m5.160s
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:34.648 ************************************
00:21:34.648 END TEST nvmf_shutdown_tc4
00:21:34.648 ************************************
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:21:34.648
00:21:34.648 real 0m40.937s
00:21:34.648 user 1m40.486s
00:21:34.648 sys 0m13.919s
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:34.648 ************************************
00:21:34.648 END TEST nvmf_shutdown
00:21:34.648 ************************************
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:34.648 ************************************
00:21:34.648 START TEST nvmf_nsid
00:21:34.648 ************************************
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:21:34.648 * Looking for test storage...
00:21:34.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:34.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.648 --rc genhtml_branch_coverage=1 00:21:34.648 --rc genhtml_function_coverage=1 00:21:34.648 --rc genhtml_legend=1 00:21:34.648 --rc geninfo_all_blocks=1 00:21:34.648 --rc 
geninfo_unexecuted_blocks=1 00:21:34.648 00:21:34.648 ' 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:34.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.648 --rc genhtml_branch_coverage=1 00:21:34.648 --rc genhtml_function_coverage=1 00:21:34.648 --rc genhtml_legend=1 00:21:34.648 --rc geninfo_all_blocks=1 00:21:34.648 --rc geninfo_unexecuted_blocks=1 00:21:34.648 00:21:34.648 ' 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:34.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.648 --rc genhtml_branch_coverage=1 00:21:34.648 --rc genhtml_function_coverage=1 00:21:34.648 --rc genhtml_legend=1 00:21:34.648 --rc geninfo_all_blocks=1 00:21:34.648 --rc geninfo_unexecuted_blocks=1 00:21:34.648 00:21:34.648 ' 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:34.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.648 --rc genhtml_branch_coverage=1 00:21:34.648 --rc genhtml_function_coverage=1 00:21:34.648 --rc genhtml_legend=1 00:21:34.648 --rc geninfo_all_blocks=1 00:21:34.648 --rc geninfo_unexecuted_blocks=1 00:21:34.648 00:21:34.648 ' 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.648 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.649 04:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:34.649 04:58:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:41.220 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.220 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:41.221 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:41.221 Found net devices under 0000:af:00.0: cvl_0_0 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:41.221 Found net devices under 0000:af:00.1: cvl_0_1 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:41.221 04:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:41.221 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:41.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:21:41.221 00:21:41.221 --- 10.0.0.2 ping statistics --- 00:21:41.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.221 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:21:41.221 00:21:41.221 --- 10.0.0.1 ping statistics --- 00:21:41.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.221 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:41.221 04:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=690712 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 690712 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 690712 ']' 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:41.221 [2024-12-10 04:58:31.523415] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:21:41.221 [2024-12-10 04:58:31.523459] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.221 [2024-12-10 04:58:31.601048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.221 [2024-12-10 04:58:31.638210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.221 [2024-12-10 04:58:31.638243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.221 [2024-12-10 04:58:31.638252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.221 [2024-12-10 04:58:31.638258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.221 [2024-12-10 04:58:31.638262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:41.221 [2024-12-10 04:58:31.638765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=690739 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:41.221 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:41.222 
04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=fe8abb75-a0ac-44c0-bff3-b998188fc042 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=ad66b55d-938f-44fd-974e-d963e194e548 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=bb9f7311-605c-46a9-ac3d-5f568c21612a 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:41.222 null0 00:21:41.222 null1 00:21:41.222 [2024-12-10 04:58:31.834566] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:21:41.222 [2024-12-10 04:58:31.834612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690739 ] 00:21:41.222 null2 00:21:41.222 [2024-12-10 04:58:31.838776] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.222 [2024-12-10 04:58:31.862972] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 690739 /var/tmp/tgt2.sock 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 690739 ']' 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:41.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.222 04:58:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:41.222 [2024-12-10 04:58:31.907764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.222 [2024-12-10 04:58:31.949746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.222 04:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.222 04:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:41.222 04:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:41.481 [2024-12-10 04:58:32.476052] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.481 [2024-12-10 04:58:32.492139] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:41.481 nvme0n1 nvme0n2 00:21:41.481 nvme1n1 00:21:41.481 04:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:41.481 04:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:41.481 04:58:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:42.858 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:42.858 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:42.858 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:21:42.858 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:42.858 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:42.858 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:42.858 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:42.858 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:42.858 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:42.858 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:42.858 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:42.858 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:42.858 04:58:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid fe8abb75-a0ac-44c0-bff3-b998188fc042 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:43.794 04:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fe8abb75a0ac44c0bff3b998188fc042 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FE8ABB75A0AC44C0BFF3B998188FC042 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ FE8ABB75A0AC44C0BFF3B998188FC042 == \F\E\8\A\B\B\7\5\A\0\A\C\4\4\C\0\B\F\F\3\B\9\9\8\1\8\8\F\C\0\4\2 ]] 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid ad66b55d-938f-44fd-974e-d963e194e548 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:43.794 
04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ad66b55d938f44fd974ed963e194e548 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AD66B55D938F44FD974ED963E194E548 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ AD66B55D938F44FD974ED963E194E548 == \A\D\6\6\B\5\5\D\9\3\8\F\4\4\F\D\9\7\4\E\D\9\6\3\E\1\9\4\E\5\4\8 ]] 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid bb9f7311-605c-46a9-ac3d-5f568c21612a 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=bb9f7311605c46a9ac3d5f568c21612a 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo BB9F7311605C46A9AC3D5F568C21612A 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ BB9F7311605C46A9AC3D5F568C21612A == \B\B\9\F\7\3\1\1\6\0\5\C\4\6\A\9\A\C\3\D\5\F\5\6\8\C\2\1\6\1\2\A ]] 00:21:43.794 04:58:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:44.053 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:44.053 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:44.053 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 690739 00:21:44.053 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 690739 ']' 00:21:44.053 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 690739 00:21:44.053 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:44.053 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:44.053 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 690739 00:21:44.053 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:44.053 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:44.053 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 690739' 00:21:44.053 killing process with pid 690739 00:21:44.053 04:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 690739 00:21:44.053 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 690739 00:21:44.312 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:44.312 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:44.312 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:44.312 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:44.312 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:44.312 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:44.312 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:44.312 rmmod nvme_tcp 00:21:44.312 rmmod nvme_fabrics 00:21:44.312 rmmod nvme_keyring 00:21:44.312 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:44.312 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:44.312 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:44.312 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 690712 ']' 00:21:44.312 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 690712 00:21:44.312 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 690712 ']' 00:21:44.312 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 690712 00:21:44.312 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:44.312 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:44.570 04:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 690712 00:21:44.570 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:44.570 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:44.570 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 690712' 00:21:44.570 killing process with pid 690712 00:21:44.570 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 690712 00:21:44.570 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 690712 00:21:44.570 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:44.570 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:44.570 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:44.570 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:44.570 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:44.570 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:44.570 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:44.570 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:44.570 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:44.570 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.570 04:58:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.570 04:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.104 04:58:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:47.104 00:21:47.104 real 0m12.393s 00:21:47.104 user 0m9.655s 00:21:47.104 sys 0m5.499s 00:21:47.104 04:58:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.104 04:58:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:47.104 ************************************ 00:21:47.104 END TEST nvmf_nsid 00:21:47.104 ************************************ 00:21:47.104 04:58:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:47.104 00:21:47.104 real 12m1.920s 00:21:47.104 user 25m47.182s 00:21:47.104 sys 3m42.079s 00:21:47.104 04:58:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.104 04:58:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:47.104 ************************************ 00:21:47.104 END TEST nvmf_target_extra 00:21:47.104 ************************************ 00:21:47.104 04:58:37 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:47.104 04:58:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:47.104 04:58:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.104 04:58:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:47.104 ************************************ 00:21:47.104 START TEST nvmf_host 00:21:47.104 ************************************ 00:21:47.104 04:58:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:47.104 * Looking for test storage... 
00:21:47.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:47.104 04:58:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:47.104 04:58:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:47.104 04:58:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:47.104 04:58:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:47.104 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:47.104 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:47.104 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:47.104 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:47.104 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:47.104 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:47.105 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:47.105 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:47.105 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:47.105 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:47.105 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:47.105 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:47.105 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:47.105 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:47.105 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:47.105 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:47.105 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:47.105 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:47.105 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:47.105 04:58:37 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:47.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.105 --rc genhtml_branch_coverage=1 00:21:47.105 --rc genhtml_function_coverage=1 00:21:47.105 --rc genhtml_legend=1 00:21:47.105 --rc geninfo_all_blocks=1 00:21:47.105 --rc geninfo_unexecuted_blocks=1 00:21:47.105 00:21:47.105 ' 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:47.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.105 --rc genhtml_branch_coverage=1 00:21:47.105 --rc genhtml_function_coverage=1 00:21:47.105 --rc genhtml_legend=1 00:21:47.105 --rc 
geninfo_all_blocks=1 00:21:47.105 --rc geninfo_unexecuted_blocks=1 00:21:47.105 00:21:47.105 ' 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:47.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.105 --rc genhtml_branch_coverage=1 00:21:47.105 --rc genhtml_function_coverage=1 00:21:47.105 --rc genhtml_legend=1 00:21:47.105 --rc geninfo_all_blocks=1 00:21:47.105 --rc geninfo_unexecuted_blocks=1 00:21:47.105 00:21:47.105 ' 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:47.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.105 --rc genhtml_branch_coverage=1 00:21:47.105 --rc genhtml_function_coverage=1 00:21:47.105 --rc genhtml_legend=1 00:21:47.105 --rc geninfo_all_blocks=1 00:21:47.105 --rc geninfo_unexecuted_blocks=1 00:21:47.105 00:21:47.105 ' 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:47.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.105 ************************************ 00:21:47.105 START TEST nvmf_multicontroller 00:21:47.105 ************************************ 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:47.105 * Looking for test storage... 
00:21:47.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:47.105 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:47.106 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:47.106 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:47.106 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:47.106 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:47.106 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:47.106 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:47.106 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:47.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.364 --rc genhtml_branch_coverage=1 00:21:47.364 --rc genhtml_function_coverage=1 
00:21:47.364 --rc genhtml_legend=1 00:21:47.364 --rc geninfo_all_blocks=1 00:21:47.364 --rc geninfo_unexecuted_blocks=1 00:21:47.364 00:21:47.364 ' 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:47.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.364 --rc genhtml_branch_coverage=1 00:21:47.364 --rc genhtml_function_coverage=1 00:21:47.364 --rc genhtml_legend=1 00:21:47.364 --rc geninfo_all_blocks=1 00:21:47.364 --rc geninfo_unexecuted_blocks=1 00:21:47.364 00:21:47.364 ' 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:47.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.364 --rc genhtml_branch_coverage=1 00:21:47.364 --rc genhtml_function_coverage=1 00:21:47.364 --rc genhtml_legend=1 00:21:47.364 --rc geninfo_all_blocks=1 00:21:47.364 --rc geninfo_unexecuted_blocks=1 00:21:47.364 00:21:47.364 ' 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:47.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.364 --rc genhtml_branch_coverage=1 00:21:47.364 --rc genhtml_function_coverage=1 00:21:47.364 --rc genhtml_legend=1 00:21:47.364 --rc geninfo_all_blocks=1 00:21:47.364 --rc geninfo_unexecuted_blocks=1 00:21:47.364 00:21:47.364 ' 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
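A note on the version gate traced just above: `lt 1.15 2` (backed by `cmp_versions` in scripts/common.sh) splits each version string on `.`, `-` and `:` and compares it field by field, and its success is what puts the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options into LCOV_OPTS. A minimal self-contained approximation of that comparison — the function name here is illustrative, not the harness's own helper:

```shell
# Field-wise "less than" version check, sketched after the cmp_versions
# trace above: split both versions on '.', '-' and ':' and compare
# numerically field by field; a missing field counts as 0.
ver_lt() {
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=${#v1[@]}
    (( ${#v2[@]} > max )) && max=${#v2[@]}
    for (( i = 0; i < max; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # first lower field: less-than
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1  # first higher field: not less
    done
    return 1  # all fields equal: not strictly less
}
```

With lcov reporting a version of at least 2, `ver_lt 1.15 2` succeeds, matching the `return 0` seen in the trace before `lcov_rc_opt` is assigned.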
NVMF_SECOND_PORT=4421 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.364 04:58:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:47.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:47.364 04:58:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:53.932 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:53.932 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.932 04:58:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:53.932 Found net devices under 0000:af:00.0: cvl_0_0 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:53.932 Found net devices under 0000:af:00.1: cvl_0_1 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.932 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.933 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.933 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.933 04:58:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:53.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:53.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:21:53.933 00:21:53.933 --- 10.0.0.2 ping statistics --- 00:21:53.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.933 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:53.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:21:53.933 00:21:53.933 --- 10.0.0.1 ping statistics --- 00:21:53.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.933 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=694975 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 694975 00:21:53.933 04:58:44 
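The namespace plumbing traced above (nvmf/common.sh's `nvmf_tcp_init`) moves the target-side port into its own network namespace, addresses both ends on 10.0.0.0/24, opens the NVMe/TCP port in iptables, and verifies reachability with the pings shown. A dry-run sketch of that sequence — `run` only prints here so the sketch can be exercised without root, and the function name is illustrative rather than the harness's own helper:

```shell
# Dry-run sketch of the target-namespace setup traced above.
# run() echoes each command instead of executing it; swap it for
# "sudo" (or remove it) to actually apply the configuration.
run() { printf '%s\n' "$*"; }

setup_target_ns() {
    local ns=$1 target_if=$2 initiator_if=$3
    run ip netns add "$ns"                                        # namespace for the target
    run ip link set "$target_if" netns "$ns"                      # move target NIC into it
    run ip addr add 10.0.0.1/24 dev "$initiator_if"               # initiator address
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target address
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

setup_target_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

After this, `ping -c 1 10.0.0.2` from the root namespace and `ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1` from inside it confirm both directions, as the trace shows.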
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 694975 ']' 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.933 04:58:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.933 [2024-12-10 04:58:44.193678] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:21:53.933 [2024-12-10 04:58:44.193735] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.933 [2024-12-10 04:58:44.272672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:53.933 [2024-12-10 04:58:44.314120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.933 [2024-12-10 04:58:44.314155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:53.933 [2024-12-10 04:58:44.314162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.933 [2024-12-10 04:58:44.314171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.933 [2024-12-10 04:58:44.314177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:53.933 [2024-12-10 04:58:44.315467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.933 [2024-12-10 04:58:44.315574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.933 [2024-12-10 04:58:44.315576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.933 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.933 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:53.933 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:53.933 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:53.933 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.192 [2024-12-10 04:58:45.074031] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.192 Malloc0 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.192 [2024-12-10 
04:58:45.140831] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.192 [2024-12-10 04:58:45.148750] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.192 Malloc1 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:54.192 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.193 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.193 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.193 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=695213 00:21:54.193 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:54.193 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:21:54.193 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 695213 /var/tmp/bdevperf.sock 00:21:54.193 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 695213 ']' 00:21:54.193 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:54.193 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:54.193 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:54.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:54.193 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:54.193 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.451 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.451 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:54.451 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:54.451 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.451 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.711 NVMe0n1 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.711 1 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:54.711 04:58:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.711 request: 00:21:54.711 { 00:21:54.711 "name": "NVMe0", 00:21:54.711 "trtype": "tcp", 00:21:54.711 "traddr": "10.0.0.2", 00:21:54.711 "adrfam": "ipv4", 00:21:54.711 "trsvcid": "4420", 00:21:54.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.711 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:54.711 "hostaddr": "10.0.0.1", 00:21:54.711 "prchk_reftag": false, 00:21:54.711 "prchk_guard": false, 00:21:54.711 "hdgst": false, 00:21:54.711 "ddgst": false, 00:21:54.711 "allow_unrecognized_csi": false, 00:21:54.711 "method": "bdev_nvme_attach_controller", 00:21:54.711 "req_id": 1 00:21:54.711 } 00:21:54.711 Got JSON-RPC error response 00:21:54.711 response: 00:21:54.711 { 00:21:54.711 "code": -114, 00:21:54.711 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:54.711 } 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:54.711 04:58:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.711 request: 00:21:54.711 { 00:21:54.711 "name": "NVMe0", 00:21:54.711 "trtype": "tcp", 00:21:54.711 "traddr": "10.0.0.2", 00:21:54.711 "adrfam": "ipv4", 00:21:54.711 "trsvcid": "4420", 00:21:54.711 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:54.711 "hostaddr": "10.0.0.1", 00:21:54.711 "prchk_reftag": false, 00:21:54.711 "prchk_guard": false, 00:21:54.711 "hdgst": false, 00:21:54.711 "ddgst": false, 00:21:54.711 "allow_unrecognized_csi": false, 00:21:54.711 "method": "bdev_nvme_attach_controller", 00:21:54.711 "req_id": 1 00:21:54.711 } 00:21:54.711 Got JSON-RPC error response 00:21:54.711 response: 00:21:54.711 { 00:21:54.711 "code": -114, 00:21:54.711 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:54.711 } 00:21:54.711 04:58:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.711 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.711 request: 00:21:54.711 { 00:21:54.711 "name": "NVMe0", 00:21:54.711 "trtype": "tcp", 00:21:54.711 "traddr": "10.0.0.2", 00:21:54.711 "adrfam": "ipv4", 00:21:54.711 "trsvcid": "4420", 00:21:54.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.711 "hostaddr": "10.0.0.1", 00:21:54.711 "prchk_reftag": false, 00:21:54.711 "prchk_guard": false, 00:21:54.711 "hdgst": false, 00:21:54.711 "ddgst": false, 00:21:54.711 "multipath": "disable", 00:21:54.711 "allow_unrecognized_csi": false, 00:21:54.711 "method": "bdev_nvme_attach_controller", 00:21:54.711 "req_id": 1 00:21:54.712 } 00:21:54.712 Got JSON-RPC error response 00:21:54.712 response: 00:21:54.712 { 00:21:54.712 "code": -114, 00:21:54.712 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:54.712 } 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.712 request: 00:21:54.712 { 00:21:54.712 "name": "NVMe0", 00:21:54.712 "trtype": "tcp", 00:21:54.712 "traddr": "10.0.0.2", 00:21:54.712 "adrfam": "ipv4", 00:21:54.712 "trsvcid": "4420", 00:21:54.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.712 "hostaddr": "10.0.0.1", 00:21:54.712 "prchk_reftag": false, 00:21:54.712 "prchk_guard": false, 00:21:54.712 "hdgst": false, 00:21:54.712 "ddgst": false, 00:21:54.712 "multipath": "failover", 00:21:54.712 "allow_unrecognized_csi": false, 00:21:54.712 "method": "bdev_nvme_attach_controller", 00:21:54.712 "req_id": 1 00:21:54.712 } 00:21:54.712 Got JSON-RPC error response 00:21:54.712 response: 00:21:54.712 { 00:21:54.712 "code": -114, 00:21:54.712 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:54.712 } 00:21:54.712 04:58:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.712 NVMe0n1 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.712 04:58:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.971 00:21:54.971 04:58:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.971 04:58:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:54.971 04:58:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:54.971 04:58:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.971 04:58:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.971 04:58:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.971 04:58:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:54.971 04:58:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:56.348 { 00:21:56.348 "results": [ 00:21:56.348 { 00:21:56.348 "job": "NVMe0n1", 00:21:56.348 "core_mask": "0x1", 00:21:56.348 "workload": "write", 00:21:56.348 "status": "finished", 00:21:56.348 "queue_depth": 128, 00:21:56.348 "io_size": 4096, 00:21:56.348 "runtime": 1.003028, 00:21:56.348 "iops": 25516.735325434584, 00:21:56.348 "mibps": 99.67474736497884, 00:21:56.348 "io_failed": 0, 00:21:56.348 "io_timeout": 0, 00:21:56.348 "avg_latency_us": 5009.978785206355, 00:21:56.348 "min_latency_us": 1451.1542857142856, 00:21:56.348 "max_latency_us": 8613.302857142857 00:21:56.348 } 00:21:56.348 ], 00:21:56.348 "core_count": 1 00:21:56.348 } 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 695213 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 695213 ']' 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 695213 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 695213 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 695213' 00:21:56.348 killing process with pid 695213 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 695213 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 695213 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:56.348 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:56.348 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:56.348 [2024-12-10 04:58:45.253300] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:21:56.348 [2024-12-10 04:58:45.253349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid695213 ] 00:21:56.348 [2024-12-10 04:58:45.325821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.348 [2024-12-10 04:58:45.365461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.348 [2024-12-10 04:58:46.016425] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 614442d7-d8d0-4065-847b-2cce6211649d already exists 00:21:56.348 [2024-12-10 04:58:46.016451] bdev.c:8150:bdev_register: *ERROR*: Unable to add uuid:614442d7-d8d0-4065-847b-2cce6211649d alias for bdev NVMe1n1 00:21:56.348 [2024-12-10 04:58:46.016459] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:56.348 Running I/O for 1 seconds... 00:21:56.348 25466.00 IOPS, 99.48 MiB/s 00:21:56.349 Latency(us) 00:21:56.349 [2024-12-10T03:58:47.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.349 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:56.349 NVMe0n1 : 1.00 25516.74 99.67 0.00 0.00 5009.98 1451.15 8613.30 00:21:56.349 [2024-12-10T03:58:47.486Z] =================================================================================================================== 00:21:56.349 [2024-12-10T03:58:47.486Z] Total : 25516.74 99.67 0.00 0.00 5009.98 1451.15 8613.30 00:21:56.349 Received shutdown signal, test time was about 1.000000 seconds 00:21:56.349 00:21:56.349 Latency(us) 00:21:56.349 [2024-12-10T03:58:47.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.349 [2024-12-10T03:58:47.486Z] =================================================================================================================== 00:21:56.349 [2024-12-10T03:58:47.486Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:56.349 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:56.349 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:56.349 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:56.349 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:56.349 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:56.349 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:56.349 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:56.349 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:56.349 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:56.349 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:56.349 rmmod nvme_tcp 00:21:56.349 rmmod nvme_fabrics 00:21:56.349 rmmod nvme_keyring 00:21:56.608 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:56.608 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:56.608 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:56.608 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 694975 ']' 00:21:56.608 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 694975 00:21:56.608 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 694975 ']' 00:21:56.608 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 694975 
00:21:56.608 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:56.608 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.608 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 694975 00:21:56.608 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:56.608 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:56.608 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 694975' 00:21:56.608 killing process with pid 694975 00:21:56.608 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 694975 00:21:56.608 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 694975 00:21:56.866 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:56.866 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:56.866 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:56.866 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:56.867 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:56.867 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:56.867 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:56.867 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:56.867 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:21:56.867 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.867 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.867 04:58:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.771 04:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:58.771 00:21:58.771 real 0m11.763s 00:21:58.771 user 0m14.576s 00:21:58.771 sys 0m5.089s 00:21:58.771 04:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.771 04:58:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:58.771 ************************************ 00:21:58.771 END TEST nvmf_multicontroller 00:21:58.771 ************************************ 00:21:58.771 04:58:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:58.771 04:58:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:58.771 04:58:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:58.771 04:58:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.031 ************************************ 00:21:59.031 START TEST nvmf_aer 00:21:59.031 ************************************ 00:21:59.031 04:58:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:59.031 * Looking for test storage... 
00:21:59.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:59.031 04:58:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:59.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.031 --rc genhtml_branch_coverage=1 00:21:59.031 --rc genhtml_function_coverage=1 00:21:59.031 --rc genhtml_legend=1 00:21:59.031 --rc geninfo_all_blocks=1 00:21:59.031 --rc geninfo_unexecuted_blocks=1 00:21:59.031 00:21:59.031 ' 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:59.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.031 --rc 
genhtml_branch_coverage=1 00:21:59.031 --rc genhtml_function_coverage=1 00:21:59.031 --rc genhtml_legend=1 00:21:59.031 --rc geninfo_all_blocks=1 00:21:59.031 --rc geninfo_unexecuted_blocks=1 00:21:59.031 00:21:59.031 ' 00:21:59.031 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:59.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.031 --rc genhtml_branch_coverage=1 00:21:59.031 --rc genhtml_function_coverage=1 00:21:59.031 --rc genhtml_legend=1 00:21:59.031 --rc geninfo_all_blocks=1 00:21:59.032 --rc geninfo_unexecuted_blocks=1 00:21:59.032 00:21:59.032 ' 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:59.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.032 --rc genhtml_branch_coverage=1 00:21:59.032 --rc genhtml_function_coverage=1 00:21:59.032 --rc genhtml_legend=1 00:21:59.032 --rc geninfo_all_blocks=1 00:21:59.032 --rc geninfo_unexecuted_blocks=1 00:21:59.032 00:21:59.032 ' 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.032 04:58:50 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:59.032 04:58:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:05.603 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:05.603 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.603 04:58:55 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:05.603 Found net devices under 0000:af:00.0: cvl_0_0 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:05.603 Found net devices under 0000:af:00.1: cvl_0_1 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:05.603 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:05.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:05.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:22:05.604 00:22:05.604 --- 10.0.0.2 ping statistics --- 00:22:05.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.604 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:05.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:22:05.604 00:22:05.604 --- 10.0.0.1 ping statistics --- 00:22:05.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.604 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:05.604 04:58:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:05.604 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:05.604 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:05.604 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:05.604 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:22:05.604 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=699021 00:22:05.604 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:05.604 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 699021 00:22:05.604 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 699021 ']' 00:22:05.604 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.604 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.604 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.604 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.604 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:05.604 [2024-12-10 04:58:56.069922] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:22:05.604 [2024-12-10 04:58:56.069973] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.604 [2024-12-10 04:58:56.150844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.604 [2024-12-10 04:58:56.191544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:05.604 [2024-12-10 04:58:56.191584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.604 [2024-12-10 04:58:56.191592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.604 [2024-12-10 04:58:56.191598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.604 [2024-12-10 04:58:56.191603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.604 [2024-12-10 04:58:56.192950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.604 [2024-12-10 04:58:56.193062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.604 [2024-12-10 04:58:56.193145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.604 [2024-12-10 04:58:56.193146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:05.863 [2024-12-10 04:58:56.943786] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:05.863 Malloc0 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.863 04:58:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:06.125 [2024-12-10 04:58:57.005936] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:06.125 [ 00:22:06.125 { 00:22:06.125 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:06.125 "subtype": "Discovery", 00:22:06.125 "listen_addresses": [], 00:22:06.125 "allow_any_host": true, 00:22:06.125 "hosts": [] 00:22:06.125 }, 00:22:06.125 { 00:22:06.125 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.125 "subtype": "NVMe", 00:22:06.125 "listen_addresses": [ 00:22:06.125 { 00:22:06.125 "trtype": "TCP", 00:22:06.125 "adrfam": "IPv4", 00:22:06.125 "traddr": "10.0.0.2", 00:22:06.125 "trsvcid": "4420" 00:22:06.125 } 00:22:06.125 ], 00:22:06.125 "allow_any_host": true, 00:22:06.125 "hosts": [], 00:22:06.125 "serial_number": "SPDK00000000000001", 00:22:06.125 "model_number": "SPDK bdev Controller", 00:22:06.125 "max_namespaces": 2, 00:22:06.125 "min_cntlid": 1, 00:22:06.125 "max_cntlid": 65519, 00:22:06.125 "namespaces": [ 00:22:06.125 { 00:22:06.125 "nsid": 1, 00:22:06.125 "bdev_name": "Malloc0", 00:22:06.125 "name": "Malloc0", 00:22:06.125 "nguid": "07425FB2DFCC4CD8ABCCEB7E40EA8540", 00:22:06.125 "uuid": "07425fb2-dfcc-4cd8-abcc-eb7e40ea8540" 00:22:06.125 } 00:22:06.125 ] 00:22:06.125 } 00:22:06.125 ] 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=699170 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:22:06.125 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:06.383 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:06.383 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:06.383 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:06.383 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:06.383 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.383 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:06.383 Malloc1 00:22:06.383 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.383 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:06.383 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:06.384 Asynchronous Event Request test 00:22:06.384 Attaching to 10.0.0.2 00:22:06.384 Attached to 10.0.0.2 00:22:06.384 Registering asynchronous event callbacks... 00:22:06.384 Starting namespace attribute notice tests for all controllers... 00:22:06.384 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:06.384 aer_cb - Changed Namespace 00:22:06.384 Cleaning up... 
00:22:06.384 [ 00:22:06.384 { 00:22:06.384 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:06.384 "subtype": "Discovery", 00:22:06.384 "listen_addresses": [], 00:22:06.384 "allow_any_host": true, 00:22:06.384 "hosts": [] 00:22:06.384 }, 00:22:06.384 { 00:22:06.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.384 "subtype": "NVMe", 00:22:06.384 "listen_addresses": [ 00:22:06.384 { 00:22:06.384 "trtype": "TCP", 00:22:06.384 "adrfam": "IPv4", 00:22:06.384 "traddr": "10.0.0.2", 00:22:06.384 "trsvcid": "4420" 00:22:06.384 } 00:22:06.384 ], 00:22:06.384 "allow_any_host": true, 00:22:06.384 "hosts": [], 00:22:06.384 "serial_number": "SPDK00000000000001", 00:22:06.384 "model_number": "SPDK bdev Controller", 00:22:06.384 "max_namespaces": 2, 00:22:06.384 "min_cntlid": 1, 00:22:06.384 "max_cntlid": 65519, 00:22:06.384 "namespaces": [ 00:22:06.384 { 00:22:06.384 "nsid": 1, 00:22:06.384 "bdev_name": "Malloc0", 00:22:06.384 "name": "Malloc0", 00:22:06.384 "nguid": "07425FB2DFCC4CD8ABCCEB7E40EA8540", 00:22:06.384 "uuid": "07425fb2-dfcc-4cd8-abcc-eb7e40ea8540" 00:22:06.384 }, 00:22:06.384 { 00:22:06.384 "nsid": 2, 00:22:06.384 "bdev_name": "Malloc1", 00:22:06.384 "name": "Malloc1", 00:22:06.384 "nguid": "250EF7CEB8834EB1AABB06E9D92D1E82", 00:22:06.384 "uuid": "250ef7ce-b883-4eb1-aabb-06e9d92d1e82" 00:22:06.384 } 00:22:06.384 ] 00:22:06.384 } 00:22:06.384 ] 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 699170 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.384 04:58:57 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:06.384 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:06.384 rmmod nvme_tcp 00:22:06.384 rmmod nvme_fabrics 00:22:06.384 rmmod nvme_keyring 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
699021 ']' 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 699021 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 699021 ']' 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 699021 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 699021 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 699021' 00:22:06.643 killing process with pid 699021 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 699021 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 699021 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.643 04:58:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.179 04:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:09.179 00:22:09.179 real 0m9.905s 00:22:09.179 user 0m8.076s 00:22:09.179 sys 0m4.842s 00:22:09.179 04:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:09.179 04:58:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:09.179 ************************************ 00:22:09.179 END TEST nvmf_aer 00:22:09.179 ************************************ 00:22:09.179 04:58:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:09.179 04:58:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:09.179 04:58:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:09.179 04:58:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.179 ************************************ 00:22:09.179 START TEST nvmf_async_init 00:22:09.179 ************************************ 00:22:09.179 04:58:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:09.179 * Looking for test storage... 
00:22:09.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:09.179 04:58:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:09.179 04:58:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:22:09.179 04:58:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:09.179 04:59:00 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:09.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.179 --rc genhtml_branch_coverage=1 00:22:09.179 --rc genhtml_function_coverage=1 00:22:09.179 --rc genhtml_legend=1 00:22:09.179 --rc geninfo_all_blocks=1 00:22:09.179 --rc geninfo_unexecuted_blocks=1 00:22:09.179 
00:22:09.179 ' 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:09.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.179 --rc genhtml_branch_coverage=1 00:22:09.179 --rc genhtml_function_coverage=1 00:22:09.179 --rc genhtml_legend=1 00:22:09.179 --rc geninfo_all_blocks=1 00:22:09.179 --rc geninfo_unexecuted_blocks=1 00:22:09.179 00:22:09.179 ' 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:09.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.179 --rc genhtml_branch_coverage=1 00:22:09.179 --rc genhtml_function_coverage=1 00:22:09.179 --rc genhtml_legend=1 00:22:09.179 --rc geninfo_all_blocks=1 00:22:09.179 --rc geninfo_unexecuted_blocks=1 00:22:09.179 00:22:09.179 ' 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:09.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.179 --rc genhtml_branch_coverage=1 00:22:09.179 --rc genhtml_function_coverage=1 00:22:09.179 --rc genhtml_legend=1 00:22:09.179 --rc geninfo_all_blocks=1 00:22:09.179 --rc geninfo_unexecuted_blocks=1 00:22:09.179 00:22:09.179 ' 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:22:09.179 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:09.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5186ba9c6f604c948820a1f6482798b6 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:09.180 04:59:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:15.745 04:59:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:15.745 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:15.745 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:15.745 Found net devices under 0000:af:00.0: cvl_0_0 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:15.745 Found net devices under 0000:af:00.1: cvl_0_1 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:15.745 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:15.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:15.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:22:15.746 00:22:15.746 --- 10.0.0.2 ping statistics --- 00:22:15.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.746 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:15.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:22:15.746 00:22:15.746 --- 10.0.0.1 ping statistics --- 00:22:15.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.746 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:15.746 04:59:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=702808 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 702808 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 702808 ']' 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.746 [2024-12-10 04:59:06.068512] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:22:15.746 [2024-12-10 04:59:06.068557] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.746 [2024-12-10 04:59:06.142758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.746 [2024-12-10 04:59:06.180559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.746 [2024-12-10 04:59:06.180592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.746 [2024-12-10 04:59:06.180599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.746 [2024-12-10 04:59:06.180604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.746 [2024-12-10 04:59:06.180609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:15.746 [2024-12-10 04:59:06.181075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.746 [2024-12-10 04:59:06.324426] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.746 null0 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5186ba9c6f604c948820a1f6482798b6 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.746 [2024-12-10 04:59:06.376702] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.746 nvme0n1 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.746 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.746 [ 00:22:15.746 { 00:22:15.746 "name": "nvme0n1", 00:22:15.746 "aliases": [ 00:22:15.746 "5186ba9c-6f60-4c94-8820-a1f6482798b6" 00:22:15.746 ], 00:22:15.746 "product_name": "NVMe disk", 00:22:15.746 "block_size": 512, 00:22:15.746 "num_blocks": 2097152, 00:22:15.746 "uuid": "5186ba9c-6f60-4c94-8820-a1f6482798b6", 00:22:15.746 "numa_id": 1, 00:22:15.746 "assigned_rate_limits": { 00:22:15.746 "rw_ios_per_sec": 0, 00:22:15.746 "rw_mbytes_per_sec": 0, 00:22:15.747 "r_mbytes_per_sec": 0, 00:22:15.747 "w_mbytes_per_sec": 0 00:22:15.747 }, 00:22:15.747 "claimed": false, 00:22:15.747 "zoned": false, 00:22:15.747 "supported_io_types": { 00:22:15.747 "read": true, 00:22:15.747 "write": true, 00:22:15.747 "unmap": false, 00:22:15.747 "flush": true, 00:22:15.747 "reset": true, 00:22:15.747 "nvme_admin": true, 00:22:15.747 "nvme_io": true, 00:22:15.747 "nvme_io_md": false, 00:22:15.747 "write_zeroes": true, 00:22:15.747 "zcopy": false, 00:22:15.747 "get_zone_info": false, 00:22:15.747 "zone_management": false, 00:22:15.747 "zone_append": false, 00:22:15.747 "compare": true, 00:22:15.747 "compare_and_write": true, 00:22:15.747 "abort": true, 00:22:15.747 "seek_hole": false, 00:22:15.747 "seek_data": false, 00:22:15.747 "copy": true, 00:22:15.747 
"nvme_iov_md": false 00:22:15.747 }, 00:22:15.747 "memory_domains": [ 00:22:15.747 { 00:22:15.747 "dma_device_id": "system", 00:22:15.747 "dma_device_type": 1 00:22:15.747 } 00:22:15.747 ], 00:22:15.747 "driver_specific": { 00:22:15.747 "nvme": [ 00:22:15.747 { 00:22:15.747 "trid": { 00:22:15.747 "trtype": "TCP", 00:22:15.747 "adrfam": "IPv4", 00:22:15.747 "traddr": "10.0.0.2", 00:22:15.747 "trsvcid": "4420", 00:22:15.747 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:15.747 }, 00:22:15.747 "ctrlr_data": { 00:22:15.747 "cntlid": 1, 00:22:15.747 "vendor_id": "0x8086", 00:22:15.747 "model_number": "SPDK bdev Controller", 00:22:15.747 "serial_number": "00000000000000000000", 00:22:15.747 "firmware_revision": "25.01", 00:22:15.747 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:15.747 "oacs": { 00:22:15.747 "security": 0, 00:22:15.747 "format": 0, 00:22:15.747 "firmware": 0, 00:22:15.747 "ns_manage": 0 00:22:15.747 }, 00:22:15.747 "multi_ctrlr": true, 00:22:15.747 "ana_reporting": false 00:22:15.747 }, 00:22:15.747 "vs": { 00:22:15.747 "nvme_version": "1.3" 00:22:15.747 }, 00:22:15.747 "ns_data": { 00:22:15.747 "id": 1, 00:22:15.747 "can_share": true 00:22:15.747 } 00:22:15.747 } 00:22:15.747 ], 00:22:15.747 "mp_policy": "active_passive" 00:22:15.747 } 00:22:15.747 } 00:22:15.747 ] 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.747 [2024-12-10 04:59:06.641233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:15.747 [2024-12-10 04:59:06.641288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x12211c0 (9): Bad file descriptor 00:22:15.747 [2024-12-10 04:59:06.773246] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.747 [ 00:22:15.747 { 00:22:15.747 "name": "nvme0n1", 00:22:15.747 "aliases": [ 00:22:15.747 "5186ba9c-6f60-4c94-8820-a1f6482798b6" 00:22:15.747 ], 00:22:15.747 "product_name": "NVMe disk", 00:22:15.747 "block_size": 512, 00:22:15.747 "num_blocks": 2097152, 00:22:15.747 "uuid": "5186ba9c-6f60-4c94-8820-a1f6482798b6", 00:22:15.747 "numa_id": 1, 00:22:15.747 "assigned_rate_limits": { 00:22:15.747 "rw_ios_per_sec": 0, 00:22:15.747 "rw_mbytes_per_sec": 0, 00:22:15.747 "r_mbytes_per_sec": 0, 00:22:15.747 "w_mbytes_per_sec": 0 00:22:15.747 }, 00:22:15.747 "claimed": false, 00:22:15.747 "zoned": false, 00:22:15.747 "supported_io_types": { 00:22:15.747 "read": true, 00:22:15.747 "write": true, 00:22:15.747 "unmap": false, 00:22:15.747 "flush": true, 00:22:15.747 "reset": true, 00:22:15.747 "nvme_admin": true, 00:22:15.747 "nvme_io": true, 00:22:15.747 "nvme_io_md": false, 00:22:15.747 "write_zeroes": true, 00:22:15.747 "zcopy": false, 00:22:15.747 "get_zone_info": false, 00:22:15.747 "zone_management": false, 00:22:15.747 "zone_append": false, 00:22:15.747 "compare": true, 00:22:15.747 "compare_and_write": true, 00:22:15.747 "abort": true, 00:22:15.747 "seek_hole": false, 00:22:15.747 "seek_data": false, 00:22:15.747 "copy": true, 00:22:15.747 "nvme_iov_md": false 00:22:15.747 }, 00:22:15.747 "memory_domains": [ 
00:22:15.747 { 00:22:15.747 "dma_device_id": "system", 00:22:15.747 "dma_device_type": 1 00:22:15.747 } 00:22:15.747 ], 00:22:15.747 "driver_specific": { 00:22:15.747 "nvme": [ 00:22:15.747 { 00:22:15.747 "trid": { 00:22:15.747 "trtype": "TCP", 00:22:15.747 "adrfam": "IPv4", 00:22:15.747 "traddr": "10.0.0.2", 00:22:15.747 "trsvcid": "4420", 00:22:15.747 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:15.747 }, 00:22:15.747 "ctrlr_data": { 00:22:15.747 "cntlid": 2, 00:22:15.747 "vendor_id": "0x8086", 00:22:15.747 "model_number": "SPDK bdev Controller", 00:22:15.747 "serial_number": "00000000000000000000", 00:22:15.747 "firmware_revision": "25.01", 00:22:15.747 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:15.747 "oacs": { 00:22:15.747 "security": 0, 00:22:15.747 "format": 0, 00:22:15.747 "firmware": 0, 00:22:15.747 "ns_manage": 0 00:22:15.747 }, 00:22:15.747 "multi_ctrlr": true, 00:22:15.747 "ana_reporting": false 00:22:15.747 }, 00:22:15.747 "vs": { 00:22:15.747 "nvme_version": "1.3" 00:22:15.747 }, 00:22:15.747 "ns_data": { 00:22:15.747 "id": 1, 00:22:15.747 "can_share": true 00:22:15.747 } 00:22:15.747 } 00:22:15.747 ], 00:22:15.747 "mp_policy": "active_passive" 00:22:15.747 } 00:22:15.747 } 00:22:15.747 ] 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.FGmNVZf4ro 
00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.FGmNVZf4ro 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.FGmNVZf4ro 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.747 [2024-12-10 04:59:06.849842] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:15.747 [2024-12-10 04:59:06.849948] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.747 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.747 [2024-12-10 04:59:06.869909] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:16.007 nvme0n1 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:16.007 [ 00:22:16.007 { 00:22:16.007 "name": "nvme0n1", 00:22:16.007 "aliases": [ 00:22:16.007 "5186ba9c-6f60-4c94-8820-a1f6482798b6" 00:22:16.007 ], 00:22:16.007 "product_name": "NVMe disk", 00:22:16.007 "block_size": 512, 00:22:16.007 "num_blocks": 2097152, 00:22:16.007 "uuid": "5186ba9c-6f60-4c94-8820-a1f6482798b6", 00:22:16.007 "numa_id": 1, 00:22:16.007 "assigned_rate_limits": { 00:22:16.007 "rw_ios_per_sec": 0, 00:22:16.007 
"rw_mbytes_per_sec": 0, 00:22:16.007 "r_mbytes_per_sec": 0, 00:22:16.007 "w_mbytes_per_sec": 0 00:22:16.007 }, 00:22:16.007 "claimed": false, 00:22:16.007 "zoned": false, 00:22:16.007 "supported_io_types": { 00:22:16.007 "read": true, 00:22:16.007 "write": true, 00:22:16.007 "unmap": false, 00:22:16.007 "flush": true, 00:22:16.007 "reset": true, 00:22:16.007 "nvme_admin": true, 00:22:16.007 "nvme_io": true, 00:22:16.007 "nvme_io_md": false, 00:22:16.007 "write_zeroes": true, 00:22:16.007 "zcopy": false, 00:22:16.007 "get_zone_info": false, 00:22:16.007 "zone_management": false, 00:22:16.007 "zone_append": false, 00:22:16.007 "compare": true, 00:22:16.007 "compare_and_write": true, 00:22:16.007 "abort": true, 00:22:16.007 "seek_hole": false, 00:22:16.007 "seek_data": false, 00:22:16.007 "copy": true, 00:22:16.007 "nvme_iov_md": false 00:22:16.007 }, 00:22:16.007 "memory_domains": [ 00:22:16.007 { 00:22:16.007 "dma_device_id": "system", 00:22:16.007 "dma_device_type": 1 00:22:16.007 } 00:22:16.007 ], 00:22:16.007 "driver_specific": { 00:22:16.007 "nvme": [ 00:22:16.007 { 00:22:16.007 "trid": { 00:22:16.007 "trtype": "TCP", 00:22:16.007 "adrfam": "IPv4", 00:22:16.007 "traddr": "10.0.0.2", 00:22:16.007 "trsvcid": "4421", 00:22:16.007 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:16.007 }, 00:22:16.007 "ctrlr_data": { 00:22:16.007 "cntlid": 3, 00:22:16.007 "vendor_id": "0x8086", 00:22:16.007 "model_number": "SPDK bdev Controller", 00:22:16.007 "serial_number": "00000000000000000000", 00:22:16.007 "firmware_revision": "25.01", 00:22:16.007 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:16.007 "oacs": { 00:22:16.007 "security": 0, 00:22:16.007 "format": 0, 00:22:16.007 "firmware": 0, 00:22:16.007 "ns_manage": 0 00:22:16.007 }, 00:22:16.007 "multi_ctrlr": true, 00:22:16.007 "ana_reporting": false 00:22:16.007 }, 00:22:16.007 "vs": { 00:22:16.007 "nvme_version": "1.3" 00:22:16.007 }, 00:22:16.007 "ns_data": { 00:22:16.007 "id": 1, 00:22:16.007 "can_share": true 00:22:16.007 } 
00:22:16.007 } 00:22:16.007 ], 00:22:16.007 "mp_policy": "active_passive" 00:22:16.007 } 00:22:16.007 } 00:22:16.007 ] 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.FGmNVZf4ro 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:16.007 04:59:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:16.007 rmmod nvme_tcp 00:22:16.007 rmmod nvme_fabrics 00:22:16.007 rmmod nvme_keyring 00:22:16.007 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:16.007 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:16.007 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:16.007 04:59:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 702808 ']' 00:22:16.007 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 702808 00:22:16.007 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 702808 ']' 00:22:16.007 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 702808 00:22:16.007 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:16.007 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:16.007 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 702808 00:22:16.007 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:16.007 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:16.007 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 702808' 00:22:16.007 killing process with pid 702808 00:22:16.007 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 702808 00:22:16.007 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 702808 00:22:16.267 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:16.267 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:16.267 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:16.267 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:16.267 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:16.267 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:16.267 04:59:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:16.267 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:16.267 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:16.267 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.267 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.267 04:59:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.186 04:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:18.444 00:22:18.444 real 0m9.434s 00:22:18.444 user 0m3.137s 00:22:18.444 sys 0m4.721s 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:18.444 ************************************ 00:22:18.444 END TEST nvmf_async_init 00:22:18.444 ************************************ 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.444 ************************************ 00:22:18.444 START TEST dma 00:22:18.444 ************************************ 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:18.444 * 
Looking for test storage... 00:22:18.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:18.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.444 --rc genhtml_branch_coverage=1 00:22:18.444 --rc genhtml_function_coverage=1 00:22:18.444 --rc genhtml_legend=1 00:22:18.444 --rc geninfo_all_blocks=1 00:22:18.444 --rc geninfo_unexecuted_blocks=1 00:22:18.444 00:22:18.444 ' 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:18.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.444 --rc genhtml_branch_coverage=1 00:22:18.444 --rc genhtml_function_coverage=1 
00:22:18.444 --rc genhtml_legend=1 00:22:18.444 --rc geninfo_all_blocks=1 00:22:18.444 --rc geninfo_unexecuted_blocks=1 00:22:18.444 00:22:18.444 ' 00:22:18.444 04:59:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:18.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.445 --rc genhtml_branch_coverage=1 00:22:18.445 --rc genhtml_function_coverage=1 00:22:18.445 --rc genhtml_legend=1 00:22:18.445 --rc geninfo_all_blocks=1 00:22:18.445 --rc geninfo_unexecuted_blocks=1 00:22:18.445 00:22:18.445 ' 00:22:18.445 04:59:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:18.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.445 --rc genhtml_branch_coverage=1 00:22:18.445 --rc genhtml_function_coverage=1 00:22:18.445 --rc genhtml_legend=1 00:22:18.445 --rc geninfo_all_blocks=1 00:22:18.445 --rc geninfo_unexecuted_blocks=1 00:22:18.445 00:22:18.445 ' 00:22:18.445 04:59:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:18.445 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:18.445 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.445 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.445 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.445 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.445 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.445 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.445 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.445 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.445 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.445 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:18.704 
04:59:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:18.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:18.704 00:22:18.704 real 0m0.203s 00:22:18.704 user 0m0.122s 00:22:18.704 sys 0m0.093s 00:22:18.704 04:59:09 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:18.704 ************************************ 00:22:18.704 END TEST dma 00:22:18.704 ************************************ 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.704 ************************************ 00:22:18.704 START TEST nvmf_identify 00:22:18.704 ************************************ 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:18.704 * Looking for test storage... 
00:22:18.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:18.704 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:18.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.964 --rc genhtml_branch_coverage=1 00:22:18.964 --rc genhtml_function_coverage=1 00:22:18.964 --rc genhtml_legend=1 00:22:18.964 --rc geninfo_all_blocks=1 00:22:18.964 --rc geninfo_unexecuted_blocks=1 00:22:18.964 00:22:18.964 ' 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:22:18.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.964 --rc genhtml_branch_coverage=1 00:22:18.964 --rc genhtml_function_coverage=1 00:22:18.964 --rc genhtml_legend=1 00:22:18.964 --rc geninfo_all_blocks=1 00:22:18.964 --rc geninfo_unexecuted_blocks=1 00:22:18.964 00:22:18.964 ' 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:18.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.964 --rc genhtml_branch_coverage=1 00:22:18.964 --rc genhtml_function_coverage=1 00:22:18.964 --rc genhtml_legend=1 00:22:18.964 --rc geninfo_all_blocks=1 00:22:18.964 --rc geninfo_unexecuted_blocks=1 00:22:18.964 00:22:18.964 ' 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:18.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.964 --rc genhtml_branch_coverage=1 00:22:18.964 --rc genhtml_function_coverage=1 00:22:18.964 --rc genhtml_legend=1 00:22:18.964 --rc geninfo_all_blocks=1 00:22:18.964 --rc geninfo_unexecuted_blocks=1 00:22:18.964 00:22:18.964 ' 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.964 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:18.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:18.965 04:59:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:25.532 04:59:15 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:25.532 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.532 
04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:25.532 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:25.532 Found net devices under 0000:af:00.0: cvl_0_0 00:22:25.532 04:59:15 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:25.532 Found net devices under 0000:af:00.1: cvl_0_1 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.532 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:25.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:22:25.533 00:22:25.533 --- 10.0.0.2 ping statistics --- 00:22:25.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.533 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:25.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:22:25.533 00:22:25.533 --- 10.0.0.1 ping statistics --- 00:22:25.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.533 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
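The `ipts` call at nvmf/common.sh@287 expands (at @790) into a plain `iptables` invocation with an `SPDK_NVMF:` comment appended, so teardown can later delete exactly the rules the test added. A dry-run sketch of such a wrapper, with `echo` standing in for the real `iptables` call (which needs root):

```shell
# Dry-run sketch of the "ipts" wrapper traced above: forward the arguments
# to iptables and tag the rule with an SPDK_NVMF comment. "echo" stands in
# for the privileged iptables call.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```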
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=706520 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 706520 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 706520 ']' 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:25.533 04:59:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:25.533 [2024-12-10 04:59:15.887133] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:22:25.533 [2024-12-10 04:59:15.887195] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.533 [2024-12-10 04:59:15.965665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.533 [2024-12-10 04:59:16.010572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.533 [2024-12-10 04:59:16.010607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.533 [2024-12-10 04:59:16.010615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.533 [2024-12-10 04:59:16.010621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.533 [2024-12-10 04:59:16.010626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:25.533 [2024-12-10 04:59:16.011946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.533 [2024-12-10 04:59:16.012057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.533 [2024-12-10 04:59:16.012164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.533 [2024-12-10 04:59:16.012179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:25.792 [2024-12-10 04:59:16.723173] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:25.792 Malloc0 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.792 04:59:16 
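The `-m 0xF` core mask handed to `nvmf_tgt` earlier is a bitmap of CPU cores, and the four "Reactor started" notices above are its four set bits (cores 0-3). A quick popcount in shell arithmetic shows the correspondence:

```shell
# Count the set bits in the 0xF core mask: each one becomes an SPDK
# reactor thread pinned to that core.
mask=0xF
count=0
i=0
while [ "$i" -lt 32 ]; do
    count=$(( count + ((mask >> i) & 1) ))
    i=$(( i + 1 ))
done
echo "$count reactors"   # 4, matching the four "Reactor started on core N" lines
```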
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:25.792 [2024-12-10 04:59:16.839435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:25.792 04:59:16 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:25.792 [ 00:22:25.792 { 00:22:25.792 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:25.792 "subtype": "Discovery", 00:22:25.792 "listen_addresses": [ 00:22:25.792 { 00:22:25.792 "trtype": "TCP", 00:22:25.792 "adrfam": "IPv4", 00:22:25.792 "traddr": "10.0.0.2", 00:22:25.792 "trsvcid": "4420" 00:22:25.792 } 00:22:25.792 ], 00:22:25.792 "allow_any_host": true, 00:22:25.792 "hosts": [] 00:22:25.792 }, 00:22:25.792 { 00:22:25.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.792 "subtype": "NVMe", 00:22:25.792 "listen_addresses": [ 00:22:25.792 { 00:22:25.792 "trtype": "TCP", 00:22:25.792 "adrfam": "IPv4", 00:22:25.792 "traddr": "10.0.0.2", 00:22:25.792 "trsvcid": "4420" 00:22:25.792 } 00:22:25.792 ], 00:22:25.792 "allow_any_host": true, 00:22:25.792 "hosts": [], 00:22:25.792 "serial_number": "SPDK00000000000001", 00:22:25.792 "model_number": "SPDK bdev Controller", 00:22:25.792 "max_namespaces": 32, 00:22:25.792 "min_cntlid": 1, 00:22:25.792 "max_cntlid": 65519, 00:22:25.792 "namespaces": [ 00:22:25.792 { 00:22:25.792 "nsid": 1, 00:22:25.792 "bdev_name": "Malloc0", 00:22:25.792 "name": "Malloc0", 00:22:25.792 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:25.792 "eui64": "ABCDEF0123456789", 00:22:25.792 "uuid": "9c8e131e-dd6a-4e7a-aabc-e9cddb061128" 00:22:25.792 } 00:22:25.792 ] 00:22:25.792 } 00:22:25.792 ] 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.792 04:59:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
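The `nvmf_get_subsystems` reply above is plain JSON. A lightweight way to pull the subsystem NQNs out of such a reply without `jq` (the heredoc-style string below reproduces just the two `nqn` entries from the log):

```shell
# Extract "nqn" values from an nvmf_get_subsystems-style JSON reply using
# only grep and cut. The sample data mirrors the two subsystems above.
json='[{"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
{"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe"}]'
nqns=$(printf '%s\n' "$json" | grep -o '"nqn": "[^"]*"' | cut -d'"' -f4)
printf '%s\n' "$nqns"
```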
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:25.792 [2024-12-10 04:59:16.892667] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:22:25.792 [2024-12-10 04:59:16.892700] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706636 ] 00:22:26.054 [2024-12-10 04:59:16.934328] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:26.054 [2024-12-10 04:59:16.934372] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:26.054 [2024-12-10 04:59:16.934377] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:26.054 [2024-12-10 04:59:16.934392] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:26.054 [2024-12-10 04:59:16.934400] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:26.054 [2024-12-10 04:59:16.934924] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:26.054 [2024-12-10 04:59:16.934952] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xdd2690 0 00:22:26.054 [2024-12-10 04:59:16.945180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:26.054 [2024-12-10 04:59:16.945193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:26.054 [2024-12-10 04:59:16.945200] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:26.054 [2024-12-10 04:59:16.945203] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:26.054 [2024-12-10 04:59:16.945238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.054 [2024-12-10 04:59:16.945243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.054 [2024-12-10 04:59:16.945247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd2690) 00:22:26.054 [2024-12-10 04:59:16.945258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:26.054 [2024-12-10 04:59:16.945274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34100, cid 0, qid 0 00:22:26.054 [2024-12-10 04:59:16.956176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.054 [2024-12-10 04:59:16.956185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.054 [2024-12-10 04:59:16.956188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.054 [2024-12-10 04:59:16.956192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34100) on tqpair=0xdd2690 00:22:26.054 [2024-12-10 04:59:16.956203] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:26.054 [2024-12-10 04:59:16.956209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:26.054 [2024-12-10 04:59:16.956214] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:26.054 [2024-12-10 04:59:16.956227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.054 [2024-12-10 04:59:16.956231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.054 [2024-12-10 04:59:16.956234] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd2690) 
00:22:26.054 [2024-12-10 04:59:16.956241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-12-10 04:59:16.956253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34100, cid 0, qid 0 00:22:26.054 [2024-12-10 04:59:16.956408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.054 [2024-12-10 04:59:16.956414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.054 [2024-12-10 04:59:16.956417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.054 [2024-12-10 04:59:16.956420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34100) on tqpair=0xdd2690 00:22:26.054 [2024-12-10 04:59:16.956427] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:26.054 [2024-12-10 04:59:16.956433] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:26.054 [2024-12-10 04:59:16.956439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.054 [2024-12-10 04:59:16.956442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.054 [2024-12-10 04:59:16.956445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd2690) 00:22:26.054 [2024-12-10 04:59:16.956451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-12-10 04:59:16.956460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34100, cid 0, qid 0 00:22:26.054 [2024-12-10 04:59:16.956524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.054 [2024-12-10 04:59:16.956529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:26.054 [2024-12-10 04:59:16.956532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.054 [2024-12-10 04:59:16.956535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34100) on tqpair=0xdd2690 00:22:26.054 [2024-12-10 04:59:16.956540] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:26.054 [2024-12-10 04:59:16.956546] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:26.054 [2024-12-10 04:59:16.956552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.054 [2024-12-10 04:59:16.956555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.054 [2024-12-10 04:59:16.956558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd2690) 00:22:26.054 [2024-12-10 04:59:16.956564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-12-10 04:59:16.956573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34100, cid 0, qid 0 00:22:26.054 [2024-12-10 04:59:16.956636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.054 [2024-12-10 04:59:16.956642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.054 [2024-12-10 04:59:16.956645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.054 [2024-12-10 04:59:16.956648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34100) on tqpair=0xdd2690 00:22:26.054 [2024-12-10 04:59:16.956652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:26.054 [2024-12-10 04:59:16.956660] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.054 [2024-12-10 04:59:16.956664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.054 [2024-12-10 04:59:16.956667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd2690) 00:22:26.054 [2024-12-10 04:59:16.956672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-12-10 04:59:16.956682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34100, cid 0, qid 0 00:22:26.054 [2024-12-10 04:59:16.956747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.054 [2024-12-10 04:59:16.956752] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.054 [2024-12-10 04:59:16.956755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.054 [2024-12-10 04:59:16.956758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34100) on tqpair=0xdd2690 00:22:26.054 [2024-12-10 04:59:16.956762] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:26.054 [2024-12-10 04:59:16.956766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:26.054 [2024-12-10 04:59:16.956774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:26.055 [2024-12-10 04:59:16.956881] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:26.055 [2024-12-10 04:59:16.956886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:26.055 [2024-12-10 04:59:16.956892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.956895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.956898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd2690) 00:22:26.055 [2024-12-10 04:59:16.956904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.055 [2024-12-10 04:59:16.956914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34100, cid 0, qid 0 00:22:26.055 [2024-12-10 04:59:16.956973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.055 [2024-12-10 04:59:16.956979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.055 [2024-12-10 04:59:16.956982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.956985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34100) on tqpair=0xdd2690 00:22:26.055 [2024-12-10 04:59:16.956988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:26.055 [2024-12-10 04:59:16.956996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.957000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.957003] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd2690) 00:22:26.055 [2024-12-10 04:59:16.957008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.055 [2024-12-10 04:59:16.957019] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34100, cid 0, qid 0 00:22:26.055 [2024-12-10 
04:59:16.957078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.055 [2024-12-10 04:59:16.957084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.055 [2024-12-10 04:59:16.957087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.957090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34100) on tqpair=0xdd2690 00:22:26.055 [2024-12-10 04:59:16.957094] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:26.055 [2024-12-10 04:59:16.957098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:26.055 [2024-12-10 04:59:16.957104] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:26.055 [2024-12-10 04:59:16.957113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:26.055 [2024-12-10 04:59:16.957120] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.957123] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd2690) 00:22:26.055 [2024-12-10 04:59:16.957129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.055 [2024-12-10 04:59:16.957138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34100, cid 0, qid 0 00:22:26.055 [2024-12-10 04:59:16.957230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:26.055 [2024-12-10 04:59:16.957236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
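The DEBUG lines above trace the standard NVMe controller enable handshake: the host observes CC.EN = 0 && CSTS.RDY = 0, writes CC.EN = 1 via a fabrics PROPERTY SET, then polls CSTS.RDY until it reads 1 or the 15000 ms timeout the log mentions expires. A sketch of that polling loop, where `csts_rdy` is a stub of mine standing in for the real fabrics PROPERTY GET on the CSTS register:

```shell
# Stub: pretend the controller reports ready immediately. The real code
# issues a fabrics PROPERTY GET on CSTS and reads the RDY bit.
csts_rdy() { echo 1; }

# Poll CSTS.RDY until it is 1, giving up after timeout_ms milliseconds
# (the trace above uses a 15000 ms state timeout).
wait_csts_rdy() {
    timeout_ms=$1
    elapsed=0
    while [ "$elapsed" -lt "$timeout_ms" ]; do
        [ "$(csts_rdy)" = 1 ] && return 0
        sleep 0.01
        elapsed=$(( elapsed + 10 ))
    done
    return 1
}

wait_csts_rdy 15000 && echo "CC.EN = 1 && CSTS.RDY = 1 - controller is ready"
```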
type =7 00:22:26.055 [2024-12-10 04:59:16.957239] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.957242] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdd2690): datao=0, datal=4096, cccid=0 00:22:26.055 [2024-12-10 04:59:16.957246] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe34100) on tqpair(0xdd2690): expected_datao=0, payload_size=4096 00:22:26.055 [2024-12-10 04:59:16.957250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.957263] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.957267] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.055 [2024-12-10 04:59:16.998302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.055 [2024-12-10 04:59:16.998305] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998309] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34100) on tqpair=0xdd2690 00:22:26.055 [2024-12-10 04:59:16.998319] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:26.055 [2024-12-10 04:59:16.998324] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:26.055 [2024-12-10 04:59:16.998328] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:26.055 [2024-12-10 04:59:16.998332] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:26.055 [2024-12-10 04:59:16.998336] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:22:26.055 [2024-12-10 04:59:16.998340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:26.055 [2024-12-10 04:59:16.998349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:26.055 [2024-12-10 04:59:16.998357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd2690) 00:22:26.055 [2024-12-10 04:59:16.998371] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:26.055 [2024-12-10 04:59:16.998382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34100, cid 0, qid 0 00:22:26.055 [2024-12-10 04:59:16.998446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.055 [2024-12-10 04:59:16.998452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.055 [2024-12-10 04:59:16.998455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34100) on tqpair=0xdd2690 00:22:26.055 [2024-12-10 04:59:16.998464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998467] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd2690) 00:22:26.055 [2024-12-10 04:59:16.998475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.055 [2024-12-10 04:59:16.998480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xdd2690) 00:22:26.055 [2024-12-10 04:59:16.998491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.055 [2024-12-10 04:59:16.998496] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xdd2690) 00:22:26.055 [2024-12-10 04:59:16.998506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.055 [2024-12-10 04:59:16.998511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd2690) 00:22:26.055 [2024-12-10 04:59:16.998522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.055 [2024-12-10 04:59:16.998526] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:26.055 [2024-12-10 04:59:16.998536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:22:26.055 [2024-12-10 04:59:16.998541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdd2690) 00:22:26.055 [2024-12-10 04:59:16.998550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.055 [2024-12-10 04:59:16.998560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34100, cid 0, qid 0 00:22:26.055 [2024-12-10 04:59:16.998565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34280, cid 1, qid 0 00:22:26.055 [2024-12-10 04:59:16.998568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34400, cid 2, qid 0 00:22:26.055 [2024-12-10 04:59:16.998574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34580, cid 3, qid 0 00:22:26.055 [2024-12-10 04:59:16.998578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34700, cid 4, qid 0 00:22:26.055 [2024-12-10 04:59:16.998670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.055 [2024-12-10 04:59:16.998676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.055 [2024-12-10 04:59:16.998678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34700) on tqpair=0xdd2690 00:22:26.055 [2024-12-10 04:59:16.998686] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:26.055 [2024-12-10 04:59:16.998690] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:26.055 [2024-12-10 04:59:16.998699] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdd2690) 00:22:26.055 [2024-12-10 04:59:16.998708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.055 [2024-12-10 04:59:16.998717] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34700, cid 4, qid 0 00:22:26.055 [2024-12-10 04:59:16.998788] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:26.055 [2024-12-10 04:59:16.998794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:26.055 [2024-12-10 04:59:16.998797] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998800] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdd2690): datao=0, datal=4096, cccid=4 00:22:26.055 [2024-12-10 04:59:16.998804] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe34700) on tqpair(0xdd2690): expected_datao=0, payload_size=4096 00:22:26.055 [2024-12-10 04:59:16.998807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.055 [2024-12-10 04:59:16.998813] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:16.998816] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:16.998826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.056 [2024-12-10 04:59:16.998831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.056 [2024-12-10 04:59:16.998834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:16.998837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34700) on tqpair=0xdd2690 00:22:26.056 [2024-12-10 04:59:16.998847] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:26.056 [2024-12-10 04:59:16.998865] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:16.998869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdd2690) 00:22:26.056 [2024-12-10 04:59:16.998874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.056 [2024-12-10 04:59:16.998879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:16.998882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:16.998885] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdd2690) 00:22:26.056 [2024-12-10 04:59:16.998890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.056 [2024-12-10 04:59:16.998903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34700, cid 4, qid 0 00:22:26.056 [2024-12-10 04:59:16.998908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34880, cid 5, qid 0 00:22:26.056 [2024-12-10 04:59:16.999008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:26.056 [2024-12-10 04:59:16.999013] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:26.056 [2024-12-10 04:59:16.999016] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:16.999019] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdd2690): datao=0, datal=1024, cccid=4 00:22:26.056 [2024-12-10 04:59:16.999023] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe34700) on tqpair(0xdd2690): expected_datao=0, 
payload_size=1024 00:22:26.056 [2024-12-10 04:59:16.999026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:16.999032] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:16.999035] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:16.999039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.056 [2024-12-10 04:59:16.999044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.056 [2024-12-10 04:59:16.999047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:16.999050] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34880) on tqpair=0xdd2690 00:22:26.056 [2024-12-10 04:59:17.039360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.056 [2024-12-10 04:59:17.039374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.056 [2024-12-10 04:59:17.039377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:17.039381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34700) on tqpair=0xdd2690 00:22:26.056 [2024-12-10 04:59:17.039392] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:17.039395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdd2690) 00:22:26.056 [2024-12-10 04:59:17.039402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.056 [2024-12-10 04:59:17.039417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34700, cid 4, qid 0 00:22:26.056 [2024-12-10 04:59:17.039547] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:26.056 [2024-12-10 04:59:17.039553] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:26.056 [2024-12-10 04:59:17.039556] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:17.039559] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdd2690): datao=0, datal=3072, cccid=4 00:22:26.056 [2024-12-10 04:59:17.039563] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe34700) on tqpair(0xdd2690): expected_datao=0, payload_size=3072 00:22:26.056 [2024-12-10 04:59:17.039566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:17.039572] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:17.039575] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:17.039596] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.056 [2024-12-10 04:59:17.039601] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.056 [2024-12-10 04:59:17.039604] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:17.039607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34700) on tqpair=0xdd2690 00:22:26.056 [2024-12-10 04:59:17.039615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:17.039618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdd2690) 00:22:26.056 [2024-12-10 04:59:17.039623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.056 [2024-12-10 04:59:17.039637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34700, cid 4, qid 0 00:22:26.056 [2024-12-10 04:59:17.039708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:26.056 [2024-12-10 
04:59:17.039716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:26.056 [2024-12-10 04:59:17.039719] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:17.039722] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdd2690): datao=0, datal=8, cccid=4 00:22:26.056 [2024-12-10 04:59:17.039726] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe34700) on tqpair(0xdd2690): expected_datao=0, payload_size=8 00:22:26.056 [2024-12-10 04:59:17.039730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:17.039735] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:17.039738] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:17.080348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.056 [2024-12-10 04:59:17.080358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.056 [2024-12-10 04:59:17.080361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.056 [2024-12-10 04:59:17.080365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34700) on tqpair=0xdd2690 00:22:26.056 ===================================================== 00:22:26.056 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:26.056 ===================================================== 00:22:26.056 Controller Capabilities/Features 00:22:26.056 ================================ 00:22:26.056 Vendor ID: 0000 00:22:26.056 Subsystem Vendor ID: 0000 00:22:26.056 Serial Number: .................... 00:22:26.056 Model Number: ........................................ 
00:22:26.056 Firmware Version: 25.01 00:22:26.056 Recommended Arb Burst: 0 00:22:26.056 IEEE OUI Identifier: 00 00 00 00:22:26.056 Multi-path I/O 00:22:26.056 May have multiple subsystem ports: No 00:22:26.056 May have multiple controllers: No 00:22:26.056 Associated with SR-IOV VF: No 00:22:26.056 Max Data Transfer Size: 131072 00:22:26.056 Max Number of Namespaces: 0 00:22:26.056 Max Number of I/O Queues: 1024 00:22:26.056 NVMe Specification Version (VS): 1.3 00:22:26.056 NVMe Specification Version (Identify): 1.3 00:22:26.056 Maximum Queue Entries: 128 00:22:26.056 Contiguous Queues Required: Yes 00:22:26.056 Arbitration Mechanisms Supported 00:22:26.056 Weighted Round Robin: Not Supported 00:22:26.056 Vendor Specific: Not Supported 00:22:26.056 Reset Timeout: 15000 ms 00:22:26.056 Doorbell Stride: 4 bytes 00:22:26.056 NVM Subsystem Reset: Not Supported 00:22:26.056 Command Sets Supported 00:22:26.056 NVM Command Set: Supported 00:22:26.056 Boot Partition: Not Supported 00:22:26.056 Memory Page Size Minimum: 4096 bytes 00:22:26.056 Memory Page Size Maximum: 4096 bytes 00:22:26.056 Persistent Memory Region: Not Supported 00:22:26.056 Optional Asynchronous Events Supported 00:22:26.056 Namespace Attribute Notices: Not Supported 00:22:26.056 Firmware Activation Notices: Not Supported 00:22:26.056 ANA Change Notices: Not Supported 00:22:26.056 PLE Aggregate Log Change Notices: Not Supported 00:22:26.056 LBA Status Info Alert Notices: Not Supported 00:22:26.056 EGE Aggregate Log Change Notices: Not Supported 00:22:26.056 Normal NVM Subsystem Shutdown event: Not Supported 00:22:26.056 Zone Descriptor Change Notices: Not Supported 00:22:26.056 Discovery Log Change Notices: Supported 00:22:26.056 Controller Attributes 00:22:26.056 128-bit Host Identifier: Not Supported 00:22:26.056 Non-Operational Permissive Mode: Not Supported 00:22:26.056 NVM Sets: Not Supported 00:22:26.056 Read Recovery Levels: Not Supported 00:22:26.056 Endurance Groups: Not Supported 00:22:26.056 
Predictable Latency Mode: Not Supported 00:22:26.056 Traffic Based Keep ALive: Not Supported 00:22:26.056 Namespace Granularity: Not Supported 00:22:26.056 SQ Associations: Not Supported 00:22:26.056 UUID List: Not Supported 00:22:26.056 Multi-Domain Subsystem: Not Supported 00:22:26.056 Fixed Capacity Management: Not Supported 00:22:26.056 Variable Capacity Management: Not Supported 00:22:26.056 Delete Endurance Group: Not Supported 00:22:26.056 Delete NVM Set: Not Supported 00:22:26.056 Extended LBA Formats Supported: Not Supported 00:22:26.056 Flexible Data Placement Supported: Not Supported 00:22:26.056 00:22:26.056 Controller Memory Buffer Support 00:22:26.056 ================================ 00:22:26.056 Supported: No 00:22:26.056 00:22:26.056 Persistent Memory Region Support 00:22:26.056 ================================ 00:22:26.056 Supported: No 00:22:26.056 00:22:26.056 Admin Command Set Attributes 00:22:26.056 ============================ 00:22:26.056 Security Send/Receive: Not Supported 00:22:26.056 Format NVM: Not Supported 00:22:26.056 Firmware Activate/Download: Not Supported 00:22:26.056 Namespace Management: Not Supported 00:22:26.056 Device Self-Test: Not Supported 00:22:26.056 Directives: Not Supported 00:22:26.056 NVMe-MI: Not Supported 00:22:26.057 Virtualization Management: Not Supported 00:22:26.057 Doorbell Buffer Config: Not Supported 00:22:26.057 Get LBA Status Capability: Not Supported 00:22:26.057 Command & Feature Lockdown Capability: Not Supported 00:22:26.057 Abort Command Limit: 1 00:22:26.057 Async Event Request Limit: 4 00:22:26.057 Number of Firmware Slots: N/A 00:22:26.057 Firmware Slot 1 Read-Only: N/A 00:22:26.057 Firmware Activation Without Reset: N/A 00:22:26.057 Multiple Update Detection Support: N/A 00:22:26.057 Firmware Update Granularity: No Information Provided 00:22:26.057 Per-Namespace SMART Log: No 00:22:26.057 Asymmetric Namespace Access Log Page: Not Supported 00:22:26.057 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:26.057 Command Effects Log Page: Not Supported 00:22:26.057 Get Log Page Extended Data: Supported 00:22:26.057 Telemetry Log Pages: Not Supported 00:22:26.057 Persistent Event Log Pages: Not Supported 00:22:26.057 Supported Log Pages Log Page: May Support 00:22:26.057 Commands Supported & Effects Log Page: Not Supported 00:22:26.057 Feature Identifiers & Effects Log Page:May Support 00:22:26.057 NVMe-MI Commands & Effects Log Page: May Support 00:22:26.057 Data Area 4 for Telemetry Log: Not Supported 00:22:26.057 Error Log Page Entries Supported: 128 00:22:26.057 Keep Alive: Not Supported 00:22:26.057 00:22:26.057 NVM Command Set Attributes 00:22:26.057 ========================== 00:22:26.057 Submission Queue Entry Size 00:22:26.057 Max: 1 00:22:26.057 Min: 1 00:22:26.057 Completion Queue Entry Size 00:22:26.057 Max: 1 00:22:26.057 Min: 1 00:22:26.057 Number of Namespaces: 0 00:22:26.057 Compare Command: Not Supported 00:22:26.057 Write Uncorrectable Command: Not Supported 00:22:26.057 Dataset Management Command: Not Supported 00:22:26.057 Write Zeroes Command: Not Supported 00:22:26.057 Set Features Save Field: Not Supported 00:22:26.057 Reservations: Not Supported 00:22:26.057 Timestamp: Not Supported 00:22:26.057 Copy: Not Supported 00:22:26.057 Volatile Write Cache: Not Present 00:22:26.057 Atomic Write Unit (Normal): 1 00:22:26.057 Atomic Write Unit (PFail): 1 00:22:26.057 Atomic Compare & Write Unit: 1 00:22:26.057 Fused Compare & Write: Supported 00:22:26.057 Scatter-Gather List 00:22:26.057 SGL Command Set: Supported 00:22:26.057 SGL Keyed: Supported 00:22:26.057 SGL Bit Bucket Descriptor: Not Supported 00:22:26.057 SGL Metadata Pointer: Not Supported 00:22:26.057 Oversized SGL: Not Supported 00:22:26.057 SGL Metadata Address: Not Supported 00:22:26.057 SGL Offset: Supported 00:22:26.057 Transport SGL Data Block: Not Supported 00:22:26.057 Replay Protected Memory Block: Not Supported 00:22:26.057 00:22:26.057 
Firmware Slot Information 00:22:26.057 ========================= 00:22:26.057 Active slot: 0 00:22:26.057 00:22:26.057 00:22:26.057 Error Log 00:22:26.057 ========= 00:22:26.057 00:22:26.057 Active Namespaces 00:22:26.057 ================= 00:22:26.057 Discovery Log Page 00:22:26.057 ================== 00:22:26.057 Generation Counter: 2 00:22:26.057 Number of Records: 2 00:22:26.057 Record Format: 0 00:22:26.057 00:22:26.057 Discovery Log Entry 0 00:22:26.057 ---------------------- 00:22:26.057 Transport Type: 3 (TCP) 00:22:26.057 Address Family: 1 (IPv4) 00:22:26.057 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:26.057 Entry Flags: 00:22:26.057 Duplicate Returned Information: 1 00:22:26.057 Explicit Persistent Connection Support for Discovery: 1 00:22:26.057 Transport Requirements: 00:22:26.057 Secure Channel: Not Required 00:22:26.057 Port ID: 0 (0x0000) 00:22:26.057 Controller ID: 65535 (0xffff) 00:22:26.057 Admin Max SQ Size: 128 00:22:26.057 Transport Service Identifier: 4420 00:22:26.057 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:26.057 Transport Address: 10.0.0.2 00:22:26.057 Discovery Log Entry 1 00:22:26.057 ---------------------- 00:22:26.057 Transport Type: 3 (TCP) 00:22:26.057 Address Family: 1 (IPv4) 00:22:26.057 Subsystem Type: 2 (NVM Subsystem) 00:22:26.057 Entry Flags: 00:22:26.057 Duplicate Returned Information: 0 00:22:26.057 Explicit Persistent Connection Support for Discovery: 0 00:22:26.057 Transport Requirements: 00:22:26.057 Secure Channel: Not Required 00:22:26.057 Port ID: 0 (0x0000) 00:22:26.057 Controller ID: 65535 (0xffff) 00:22:26.057 Admin Max SQ Size: 128 00:22:26.057 Transport Service Identifier: 4420 00:22:26.057 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:26.057 Transport Address: 10.0.0.2 [2024-12-10 04:59:17.080447] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:26.057 [2024-12-10 
04:59:17.080457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34100) on tqpair=0xdd2690 00:22:26.057 [2024-12-10 04:59:17.080463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.057 [2024-12-10 04:59:17.080467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34280) on tqpair=0xdd2690 00:22:26.057 [2024-12-10 04:59:17.080471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.057 [2024-12-10 04:59:17.080476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34400) on tqpair=0xdd2690 00:22:26.057 [2024-12-10 04:59:17.080479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.057 [2024-12-10 04:59:17.080484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34580) on tqpair=0xdd2690 00:22:26.057 [2024-12-10 04:59:17.080487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.057 [2024-12-10 04:59:17.080496] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.057 [2024-12-10 04:59:17.080500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.057 [2024-12-10 04:59:17.080503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd2690) 00:22:26.057 [2024-12-10 04:59:17.080510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.057 [2024-12-10 04:59:17.080522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34580, cid 3, qid 0 00:22:26.057 [2024-12-10 04:59:17.080582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.057 [2024-12-10 
04:59:17.080587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.057 [2024-12-10 04:59:17.080590] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.057 [2024-12-10 04:59:17.080593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34580) on tqpair=0xdd2690 00:22:26.057 [2024-12-10 04:59:17.080599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.057 [2024-12-10 04:59:17.080602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.057 [2024-12-10 04:59:17.080605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd2690) 00:22:26.057 [2024-12-10 04:59:17.080610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.057 [2024-12-10 04:59:17.080623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34580, cid 3, qid 0 00:22:26.057 [2024-12-10 04:59:17.080702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.057 [2024-12-10 04:59:17.080709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.057 [2024-12-10 04:59:17.080712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.057 [2024-12-10 04:59:17.080715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34580) on tqpair=0xdd2690 00:22:26.057 [2024-12-10 04:59:17.080719] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:26.057 [2024-12-10 04:59:17.080723] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:26.057 [2024-12-10 04:59:17.080731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.057 [2024-12-10 04:59:17.080734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.057 
[2024-12-10 04:59:17.080737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd2690) 00:22:26.057 [2024-12-10 04:59:17.080743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.057 [2024-12-10 04:59:17.080752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34580, cid 3, qid 0 00:22:26.057 [2024-12-10 04:59:17.080813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.057 [2024-12-10 04:59:17.080819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.057 [2024-12-10 04:59:17.080822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.057 [2024-12-10 04:59:17.080825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34580) on tqpair=0xdd2690 00:22:26.057 [2024-12-10 04:59:17.080833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.057 [2024-12-10 04:59:17.080836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.057 [2024-12-10 04:59:17.080839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd2690) 00:22:26.057 [2024-12-10 04:59:17.080845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.057 [2024-12-10 04:59:17.080854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34580, cid 3, qid 0 00:22:26.057 [2024-12-10 04:59:17.080913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.057 [2024-12-10 04:59:17.080919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.057 [2024-12-10 04:59:17.080922] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.057 [2024-12-10 04:59:17.080925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34580) on tqpair=0xdd2690 
00:22:26.057 [2024-12-10 04:59:17.080933] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.057 [2024-12-10 04:59:17.080936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.057 [2024-12-10 04:59:17.080939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd2690) 00:22:26.058 [2024-12-10 04:59:17.080945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.058 [2024-12-10 04:59:17.080953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34580, cid 3, qid 0 00:22:26.058 [2024-12-10 04:59:17.081015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.058 [2024-12-10 04:59:17.081020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.058 [2024-12-10 04:59:17.081023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.081027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34580) on tqpair=0xdd2690 00:22:26.058 [2024-12-10 04:59:17.081034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.081038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.081041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd2690) 00:22:26.058 [2024-12-10 04:59:17.081046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.058 [2024-12-10 04:59:17.081057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34580, cid 3, qid 0 00:22:26.058 [2024-12-10 04:59:17.081115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.058 [2024-12-10 04:59:17.081121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.058 
[2024-12-10 04:59:17.081124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.081127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34580) on tqpair=0xdd2690 00:22:26.058 [2024-12-10 04:59:17.081134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.081138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.081141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd2690) 00:22:26.058 [2024-12-10 04:59:17.081146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.058 [2024-12-10 04:59:17.081155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34580, cid 3, qid 0 00:22:26.058 [2024-12-10 04:59:17.085174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.058 [2024-12-10 04:59:17.085183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.058 [2024-12-10 04:59:17.085186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.085189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34580) on tqpair=0xdd2690 00:22:26.058 [2024-12-10 04:59:17.085198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.085201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.085204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd2690) 00:22:26.058 [2024-12-10 04:59:17.085210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.058 [2024-12-10 04:59:17.085220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe34580, cid 3, qid 0 
00:22:26.058 [2024-12-10 04:59:17.085370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.058 [2024-12-10 04:59:17.085375] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.058 [2024-12-10 04:59:17.085378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.085381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe34580) on tqpair=0xdd2690 00:22:26.058 [2024-12-10 04:59:17.085387] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:22:26.058 00:22:26.058 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:26.058 [2024-12-10 04:59:17.124371] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:22:26.058 [2024-12-10 04:59:17.124420] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706699 ] 00:22:26.058 [2024-12-10 04:59:17.163376] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:26.058 [2024-12-10 04:59:17.163415] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:26.058 [2024-12-10 04:59:17.163419] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:26.058 [2024-12-10 04:59:17.163432] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:26.058 [2024-12-10 04:59:17.163442] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:26.058 [2024-12-10 04:59:17.167320] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:26.058 [2024-12-10 04:59:17.167347] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb5a690 0 00:22:26.058 [2024-12-10 04:59:17.174178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:26.058 [2024-12-10 04:59:17.174191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:26.058 [2024-12-10 04:59:17.174197] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:26.058 [2024-12-10 04:59:17.174200] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:26.058 [2024-12-10 04:59:17.174225] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.174230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.174233] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5a690) 00:22:26.058 [2024-12-10 04:59:17.174243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:26.058 [2024-12-10 04:59:17.174260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc100, cid 0, qid 0 00:22:26.058 [2024-12-10 04:59:17.182177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.058 [2024-12-10 04:59:17.182186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.058 [2024-12-10 04:59:17.182189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.182192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc100) on tqpair=0xb5a690 00:22:26.058 [2024-12-10 04:59:17.182200] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:26.058 [2024-12-10 04:59:17.182205] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:26.058 [2024-12-10 04:59:17.182210] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:26.058 [2024-12-10 04:59:17.182222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.182226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.182229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5a690) 00:22:26.058 [2024-12-10 04:59:17.182235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.058 [2024-12-10 04:59:17.182248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc100, cid 0, qid 0 00:22:26.058 [2024-12-10 04:59:17.182374] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.058 [2024-12-10 04:59:17.182379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.058 [2024-12-10 04:59:17.182382] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.182386] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc100) on tqpair=0xb5a690 00:22:26.058 [2024-12-10 04:59:17.182392] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:26.058 [2024-12-10 04:59:17.182399] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:26.058 [2024-12-10 04:59:17.182405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.182408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.182411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5a690) 00:22:26.058 [2024-12-10 04:59:17.182417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.058 [2024-12-10 04:59:17.182426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc100, cid 0, qid 0 00:22:26.058 [2024-12-10 04:59:17.182496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.058 [2024-12-10 04:59:17.182504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.058 [2024-12-10 04:59:17.182507] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.182511] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc100) on tqpair=0xb5a690 00:22:26.058 [2024-12-10 04:59:17.182515] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to check en (no timeout) 00:22:26.058 [2024-12-10 04:59:17.182522] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:26.058 [2024-12-10 04:59:17.182527] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.182530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.058 [2024-12-10 04:59:17.182534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5a690) 00:22:26.058 [2024-12-10 04:59:17.182539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.059 [2024-12-10 04:59:17.182549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc100, cid 0, qid 0 00:22:26.059 [2024-12-10 04:59:17.182612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.059 [2024-12-10 04:59:17.182618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.059 [2024-12-10 04:59:17.182621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.182624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc100) on tqpair=0xb5a690 00:22:26.059 [2024-12-10 04:59:17.182628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:26.059 [2024-12-10 04:59:17.182636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.182639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.182642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5a690) 00:22:26.059 [2024-12-10 04:59:17.182648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.059 [2024-12-10 04:59:17.182657] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc100, cid 0, qid 0 00:22:26.059 [2024-12-10 04:59:17.182729] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.059 [2024-12-10 04:59:17.182736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.059 [2024-12-10 04:59:17.182741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.182747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc100) on tqpair=0xb5a690 00:22:26.059 [2024-12-10 04:59:17.182751] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:26.059 [2024-12-10 04:59:17.182757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:26.059 [2024-12-10 04:59:17.182764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:26.059 [2024-12-10 04:59:17.182873] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:26.059 [2024-12-10 04:59:17.182877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:26.059 [2024-12-10 04:59:17.182884] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.182887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.182890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5a690) 00:22:26.059 [2024-12-10 04:59:17.182895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.059 [2024-12-10 04:59:17.182909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc100, cid 0, qid 0 00:22:26.059 [2024-12-10 04:59:17.182977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.059 [2024-12-10 04:59:17.182982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.059 [2024-12-10 04:59:17.182985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.182989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc100) on tqpair=0xb5a690 00:22:26.059 [2024-12-10 04:59:17.182992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:26.059 [2024-12-10 04:59:17.183001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5a690) 00:22:26.059 [2024-12-10 04:59:17.183012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.059 [2024-12-10 04:59:17.183022] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc100, cid 0, qid 0 00:22:26.059 [2024-12-10 04:59:17.183089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.059 [2024-12-10 04:59:17.183094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.059 [2024-12-10 04:59:17.183097] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc100) on tqpair=0xb5a690 00:22:26.059 [2024-12-10 04:59:17.183104] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:26.059 [2024-12-10 04:59:17.183109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:26.059 [2024-12-10 04:59:17.183116] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:26.059 [2024-12-10 04:59:17.183122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:26.059 [2024-12-10 04:59:17.183129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5a690) 00:22:26.059 [2024-12-10 04:59:17.183138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.059 [2024-12-10 04:59:17.183148] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc100, cid 0, qid 0 00:22:26.059 [2024-12-10 04:59:17.183251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:26.059 [2024-12-10 04:59:17.183256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:26.059 [2024-12-10 04:59:17.183259] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183263] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb5a690): datao=0, datal=4096, cccid=0 00:22:26.059 [2024-12-10 04:59:17.183267] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbbc100) on tqpair(0xb5a690): expected_datao=0, payload_size=4096 00:22:26.059 [2024-12-10 04:59:17.183270] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183277] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183280] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.059 [2024-12-10 04:59:17.183297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.059 [2024-12-10 04:59:17.183300] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc100) on tqpair=0xb5a690 00:22:26.059 [2024-12-10 04:59:17.183313] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:26.059 [2024-12-10 04:59:17.183319] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:26.059 [2024-12-10 04:59:17.183323] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:26.059 [2024-12-10 04:59:17.183326] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:26.059 [2024-12-10 04:59:17.183330] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:26.059 [2024-12-10 04:59:17.183334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:26.059 [2024-12-10 04:59:17.183342] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:26.059 [2024-12-10 04:59:17.183348] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183351] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5a690) 00:22:26.059 [2024-12-10 04:59:17.183361] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:26.059 [2024-12-10 04:59:17.183372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc100, cid 0, qid 0 00:22:26.059 [2024-12-10 04:59:17.183434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.059 [2024-12-10 04:59:17.183440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.059 [2024-12-10 04:59:17.183443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc100) on tqpair=0xb5a690 00:22:26.059 [2024-12-10 04:59:17.183451] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5a690) 00:22:26.059 [2024-12-10 04:59:17.183463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.059 [2024-12-10 04:59:17.183469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb5a690) 00:22:26.059 [2024-12-10 04:59:17.183481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:26.059 [2024-12-10 04:59:17.183486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183490] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb5a690) 00:22:26.059 [2024-12-10 04:59:17.183498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.059 [2024-12-10 04:59:17.183504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5a690) 00:22:26.059 [2024-12-10 04:59:17.183516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.059 [2024-12-10 04:59:17.183521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:26.059 [2024-12-10 04:59:17.183531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:26.059 [2024-12-10 04:59:17.183538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.059 [2024-12-10 04:59:17.183541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb5a690) 00:22:26.059 [2024-12-10 04:59:17.183546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.059 [2024-12-10 04:59:17.183557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xbbc100, cid 0, qid 0 00:22:26.059 [2024-12-10 04:59:17.183563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc280, cid 1, qid 0 00:22:26.059 [2024-12-10 04:59:17.183567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc400, cid 2, qid 0 00:22:26.059 [2024-12-10 04:59:17.183572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc580, cid 3, qid 0 00:22:26.059 [2024-12-10 04:59:17.183576] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc700, cid 4, qid 0 00:22:26.322 [2024-12-10 04:59:17.183663] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.322 [2024-12-10 04:59:17.183669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.322 [2024-12-10 04:59:17.183674] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.322 [2024-12-10 04:59:17.183679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc700) on tqpair=0xb5a690 00:22:26.322 [2024-12-10 04:59:17.183684] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:26.322 [2024-12-10 04:59:17.183688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:26.322 [2024-12-10 04:59:17.183695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:26.322 [2024-12-10 04:59:17.183701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:26.322 [2024-12-10 04:59:17.183706] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.322 [2024-12-10 04:59:17.183710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.322 [2024-12-10 
04:59:17.183713] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb5a690) 00:22:26.322 [2024-12-10 04:59:17.183718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:26.322 [2024-12-10 04:59:17.183728] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc700, cid 4, qid 0 00:22:26.322 [2024-12-10 04:59:17.183787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.322 [2024-12-10 04:59:17.183793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.322 [2024-12-10 04:59:17.183796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.322 [2024-12-10 04:59:17.183800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc700) on tqpair=0xb5a690 00:22:26.322 [2024-12-10 04:59:17.183853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:26.322 [2024-12-10 04:59:17.183863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:26.322 [2024-12-10 04:59:17.183869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.322 [2024-12-10 04:59:17.183872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb5a690) 00:22:26.322 [2024-12-10 04:59:17.183878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.322 [2024-12-10 04:59:17.183890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc700, cid 4, qid 0 00:22:26.322 [2024-12-10 04:59:17.183957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:26.322 [2024-12-10 04:59:17.183964] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:26.322 [2024-12-10 04:59:17.183967] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.183970] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb5a690): datao=0, datal=4096, cccid=4 00:22:26.323 [2024-12-10 04:59:17.183973] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbbc700) on tqpair(0xb5a690): expected_datao=0, payload_size=4096 00:22:26.323 [2024-12-10 04:59:17.183977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.183988] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.183991] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184027] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.323 [2024-12-10 04:59:17.184032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.323 [2024-12-10 04:59:17.184035] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc700) on tqpair=0xb5a690 00:22:26.323 [2024-12-10 04:59:17.184046] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:26.323 [2024-12-10 04:59:17.184058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:26.323 [2024-12-10 04:59:17.184066] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:26.323 [2024-12-10 04:59:17.184072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0xb5a690) 00:22:26.323 [2024-12-10 04:59:17.184081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.323 [2024-12-10 04:59:17.184091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc700, cid 4, qid 0 00:22:26.323 [2024-12-10 04:59:17.184179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:26.323 [2024-12-10 04:59:17.184186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:26.323 [2024-12-10 04:59:17.184189] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184192] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb5a690): datao=0, datal=4096, cccid=4 00:22:26.323 [2024-12-10 04:59:17.184196] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbbc700) on tqpair(0xb5a690): expected_datao=0, payload_size=4096 00:22:26.323 [2024-12-10 04:59:17.184200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184210] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184214] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.323 [2024-12-10 04:59:17.184243] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.323 [2024-12-10 04:59:17.184246] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184250] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc700) on tqpair=0xb5a690 00:22:26.323 [2024-12-10 04:59:17.184260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:26.323 [2024-12-10 
04:59:17.184269] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:26.323 [2024-12-10 04:59:17.184277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb5a690) 00:22:26.323 [2024-12-10 04:59:17.184286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.323 [2024-12-10 04:59:17.184297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc700, cid 4, qid 0 00:22:26.323 [2024-12-10 04:59:17.184365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:26.323 [2024-12-10 04:59:17.184371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:26.323 [2024-12-10 04:59:17.184374] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184377] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb5a690): datao=0, datal=4096, cccid=4 00:22:26.323 [2024-12-10 04:59:17.184380] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbbc700) on tqpair(0xb5a690): expected_datao=0, payload_size=4096 00:22:26.323 [2024-12-10 04:59:17.184384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184394] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184397] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.323 [2024-12-10 04:59:17.184425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.323 [2024-12-10 04:59:17.184428] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc700) on tqpair=0xb5a690 00:22:26.323 [2024-12-10 04:59:17.184437] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:26.323 [2024-12-10 04:59:17.184444] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:26.323 [2024-12-10 04:59:17.184451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:26.323 [2024-12-10 04:59:17.184457] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:26.323 [2024-12-10 04:59:17.184462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:26.323 [2024-12-10 04:59:17.184466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:26.323 [2024-12-10 04:59:17.184471] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:26.323 [2024-12-10 04:59:17.184475] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:26.323 [2024-12-10 04:59:17.184479] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:26.323 [2024-12-10 04:59:17.184491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184494] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb5a690) 00:22:26.323 [2024-12-10 04:59:17.184500] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.323 [2024-12-10 04:59:17.184505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb5a690) 00:22:26.323 [2024-12-10 04:59:17.184517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.323 [2024-12-10 04:59:17.184531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc700, cid 4, qid 0 00:22:26.323 [2024-12-10 04:59:17.184536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc880, cid 5, qid 0 00:22:26.323 [2024-12-10 04:59:17.184614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.323 [2024-12-10 04:59:17.184619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.323 [2024-12-10 04:59:17.184621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc700) on tqpair=0xb5a690 00:22:26.323 [2024-12-10 04:59:17.184630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.323 [2024-12-10 04:59:17.184635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.323 [2024-12-10 04:59:17.184638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc880) on tqpair=0xb5a690 00:22:26.323 [2024-12-10 
04:59:17.184649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.184653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb5a690) 00:22:26.323 [2024-12-10 04:59:17.184658] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.323 [2024-12-10 04:59:17.184668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc880, cid 5, qid 0 00:22:26.323 [2024-12-10 04:59:17.188175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.323 [2024-12-10 04:59:17.188183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.323 [2024-12-10 04:59:17.188186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.188189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc880) on tqpair=0xb5a690 00:22:26.323 [2024-12-10 04:59:17.188199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.188202] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb5a690) 00:22:26.323 [2024-12-10 04:59:17.188208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.323 [2024-12-10 04:59:17.188219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc880, cid 5, qid 0 00:22:26.323 [2024-12-10 04:59:17.188358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.323 [2024-12-10 04:59:17.188363] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.323 [2024-12-10 04:59:17.188366] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.188370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0xbbc880) on tqpair=0xb5a690 00:22:26.323 [2024-12-10 04:59:17.188377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.188381] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb5a690) 00:22:26.323 [2024-12-10 04:59:17.188386] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.323 [2024-12-10 04:59:17.188395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc880, cid 5, qid 0 00:22:26.323 [2024-12-10 04:59:17.188464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.323 [2024-12-10 04:59:17.188469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.323 [2024-12-10 04:59:17.188472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.188476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc880) on tqpair=0xb5a690 00:22:26.323 [2024-12-10 04:59:17.188489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.188493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb5a690) 00:22:26.323 [2024-12-10 04:59:17.188500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.323 [2024-12-10 04:59:17.188507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.323 [2024-12-10 04:59:17.188510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb5a690) 00:22:26.323 [2024-12-10 04:59:17.188515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.324 
[2024-12-10 04:59:17.188521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188525] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb5a690) 00:22:26.324 [2024-12-10 04:59:17.188530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.324 [2024-12-10 04:59:17.188536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb5a690) 00:22:26.324 [2024-12-10 04:59:17.188544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.324 [2024-12-10 04:59:17.188555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc880, cid 5, qid 0 00:22:26.324 [2024-12-10 04:59:17.188559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc700, cid 4, qid 0 00:22:26.324 [2024-12-10 04:59:17.188563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbca00, cid 6, qid 0 00:22:26.324 [2024-12-10 04:59:17.188567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbcb80, cid 7, qid 0 00:22:26.324 [2024-12-10 04:59:17.188696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:26.324 [2024-12-10 04:59:17.188702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:26.324 [2024-12-10 04:59:17.188705] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188708] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb5a690): datao=0, datal=8192, cccid=5 00:22:26.324 [2024-12-10 04:59:17.188712] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0xbbc880) on tqpair(0xb5a690): expected_datao=0, payload_size=8192 00:22:26.324 [2024-12-10 04:59:17.188716] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188727] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188731] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188739] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:26.324 [2024-12-10 04:59:17.188744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:26.324 [2024-12-10 04:59:17.188747] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188750] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb5a690): datao=0, datal=512, cccid=4 00:22:26.324 [2024-12-10 04:59:17.188754] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbbc700) on tqpair(0xb5a690): expected_datao=0, payload_size=512 00:22:26.324 [2024-12-10 04:59:17.188757] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188763] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188766] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:26.324 [2024-12-10 04:59:17.188775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:26.324 [2024-12-10 04:59:17.188778] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188781] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb5a690): datao=0, datal=512, cccid=6 00:22:26.324 [2024-12-10 04:59:17.188787] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbbca00) on tqpair(0xb5a690): expected_datao=0, 
payload_size=512 00:22:26.324 [2024-12-10 04:59:17.188790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188796] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188799] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:26.324 [2024-12-10 04:59:17.188808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:26.324 [2024-12-10 04:59:17.188811] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188814] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb5a690): datao=0, datal=4096, cccid=7 00:22:26.324 [2024-12-10 04:59:17.188818] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbbcb80) on tqpair(0xb5a690): expected_datao=0, payload_size=4096 00:22:26.324 [2024-12-10 04:59:17.188822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188827] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188830] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.324 [2024-12-10 04:59:17.188842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.324 [2024-12-10 04:59:17.188845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc880) on tqpair=0xb5a690 00:22:26.324 [2024-12-10 04:59:17.188858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.324 [2024-12-10 04:59:17.188863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.324 [2024-12-10 
04:59:17.188866] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc700) on tqpair=0xb5a690 00:22:26.324 [2024-12-10 04:59:17.188877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.324 [2024-12-10 04:59:17.188882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.324 [2024-12-10 04:59:17.188885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbca00) on tqpair=0xb5a690 00:22:26.324 [2024-12-10 04:59:17.188894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.324 [2024-12-10 04:59:17.188898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.324 [2024-12-10 04:59:17.188902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.324 [2024-12-10 04:59:17.188905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbcb80) on tqpair=0xb5a690 00:22:26.324 ===================================================== 00:22:26.324 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:26.324 ===================================================== 00:22:26.324 Controller Capabilities/Features 00:22:26.324 ================================ 00:22:26.324 Vendor ID: 8086 00:22:26.324 Subsystem Vendor ID: 8086 00:22:26.324 Serial Number: SPDK00000000000001 00:22:26.324 Model Number: SPDK bdev Controller 00:22:26.324 Firmware Version: 25.01 00:22:26.324 Recommended Arb Burst: 6 00:22:26.324 IEEE OUI Identifier: e4 d2 5c 00:22:26.324 Multi-path I/O 00:22:26.324 May have multiple subsystem ports: Yes 00:22:26.324 May have multiple controllers: Yes 00:22:26.324 Associated with SR-IOV VF: No 00:22:26.324 Max Data Transfer Size: 131072 00:22:26.324 Max Number of Namespaces: 32 00:22:26.324 
Max Number of I/O Queues: 127 00:22:26.324 NVMe Specification Version (VS): 1.3 00:22:26.324 NVMe Specification Version (Identify): 1.3 00:22:26.324 Maximum Queue Entries: 128 00:22:26.324 Contiguous Queues Required: Yes 00:22:26.324 Arbitration Mechanisms Supported 00:22:26.324 Weighted Round Robin: Not Supported 00:22:26.324 Vendor Specific: Not Supported 00:22:26.324 Reset Timeout: 15000 ms 00:22:26.324 Doorbell Stride: 4 bytes 00:22:26.324 NVM Subsystem Reset: Not Supported 00:22:26.324 Command Sets Supported 00:22:26.324 NVM Command Set: Supported 00:22:26.324 Boot Partition: Not Supported 00:22:26.324 Memory Page Size Minimum: 4096 bytes 00:22:26.324 Memory Page Size Maximum: 4096 bytes 00:22:26.324 Persistent Memory Region: Not Supported 00:22:26.324 Optional Asynchronous Events Supported 00:22:26.324 Namespace Attribute Notices: Supported 00:22:26.324 Firmware Activation Notices: Not Supported 00:22:26.324 ANA Change Notices: Not Supported 00:22:26.324 PLE Aggregate Log Change Notices: Not Supported 00:22:26.324 LBA Status Info Alert Notices: Not Supported 00:22:26.324 EGE Aggregate Log Change Notices: Not Supported 00:22:26.324 Normal NVM Subsystem Shutdown event: Not Supported 00:22:26.324 Zone Descriptor Change Notices: Not Supported 00:22:26.324 Discovery Log Change Notices: Not Supported 00:22:26.324 Controller Attributes 00:22:26.324 128-bit Host Identifier: Supported 00:22:26.324 Non-Operational Permissive Mode: Not Supported 00:22:26.324 NVM Sets: Not Supported 00:22:26.324 Read Recovery Levels: Not Supported 00:22:26.324 Endurance Groups: Not Supported 00:22:26.324 Predictable Latency Mode: Not Supported 00:22:26.324 Traffic Based Keep ALive: Not Supported 00:22:26.324 Namespace Granularity: Not Supported 00:22:26.324 SQ Associations: Not Supported 00:22:26.324 UUID List: Not Supported 00:22:26.324 Multi-Domain Subsystem: Not Supported 00:22:26.324 Fixed Capacity Management: Not Supported 00:22:26.324 Variable Capacity Management: Not Supported 
00:22:26.324 Delete Endurance Group: Not Supported 00:22:26.324 Delete NVM Set: Not Supported 00:22:26.324 Extended LBA Formats Supported: Not Supported 00:22:26.324 Flexible Data Placement Supported: Not Supported 00:22:26.324 00:22:26.324 Controller Memory Buffer Support 00:22:26.324 ================================ 00:22:26.324 Supported: No 00:22:26.324 00:22:26.324 Persistent Memory Region Support 00:22:26.324 ================================ 00:22:26.324 Supported: No 00:22:26.324 00:22:26.324 Admin Command Set Attributes 00:22:26.324 ============================ 00:22:26.324 Security Send/Receive: Not Supported 00:22:26.324 Format NVM: Not Supported 00:22:26.324 Firmware Activate/Download: Not Supported 00:22:26.324 Namespace Management: Not Supported 00:22:26.324 Device Self-Test: Not Supported 00:22:26.324 Directives: Not Supported 00:22:26.324 NVMe-MI: Not Supported 00:22:26.324 Virtualization Management: Not Supported 00:22:26.324 Doorbell Buffer Config: Not Supported 00:22:26.324 Get LBA Status Capability: Not Supported 00:22:26.324 Command & Feature Lockdown Capability: Not Supported 00:22:26.324 Abort Command Limit: 4 00:22:26.324 Async Event Request Limit: 4 00:22:26.324 Number of Firmware Slots: N/A 00:22:26.324 Firmware Slot 1 Read-Only: N/A 00:22:26.324 Firmware Activation Without Reset: N/A 00:22:26.324 Multiple Update Detection Support: N/A 00:22:26.324 Firmware Update Granularity: No Information Provided 00:22:26.324 Per-Namespace SMART Log: No 00:22:26.325 Asymmetric Namespace Access Log Page: Not Supported 00:22:26.325 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:26.325 Command Effects Log Page: Supported 00:22:26.325 Get Log Page Extended Data: Supported 00:22:26.325 Telemetry Log Pages: Not Supported 00:22:26.325 Persistent Event Log Pages: Not Supported 00:22:26.325 Supported Log Pages Log Page: May Support 00:22:26.325 Commands Supported & Effects Log Page: Not Supported 00:22:26.325 Feature Identifiers & Effects Log Page:May Support 
00:22:26.325 NVMe-MI Commands & Effects Log Page: May Support 00:22:26.325 Data Area 4 for Telemetry Log: Not Supported 00:22:26.325 Error Log Page Entries Supported: 128 00:22:26.325 Keep Alive: Supported 00:22:26.325 Keep Alive Granularity: 10000 ms 00:22:26.325 00:22:26.325 NVM Command Set Attributes 00:22:26.325 ========================== 00:22:26.325 Submission Queue Entry Size 00:22:26.325 Max: 64 00:22:26.325 Min: 64 00:22:26.325 Completion Queue Entry Size 00:22:26.325 Max: 16 00:22:26.325 Min: 16 00:22:26.325 Number of Namespaces: 32 00:22:26.325 Compare Command: Supported 00:22:26.325 Write Uncorrectable Command: Not Supported 00:22:26.325 Dataset Management Command: Supported 00:22:26.325 Write Zeroes Command: Supported 00:22:26.325 Set Features Save Field: Not Supported 00:22:26.325 Reservations: Supported 00:22:26.325 Timestamp: Not Supported 00:22:26.325 Copy: Supported 00:22:26.325 Volatile Write Cache: Present 00:22:26.325 Atomic Write Unit (Normal): 1 00:22:26.325 Atomic Write Unit (PFail): 1 00:22:26.325 Atomic Compare & Write Unit: 1 00:22:26.325 Fused Compare & Write: Supported 00:22:26.325 Scatter-Gather List 00:22:26.325 SGL Command Set: Supported 00:22:26.325 SGL Keyed: Supported 00:22:26.325 SGL Bit Bucket Descriptor: Not Supported 00:22:26.325 SGL Metadata Pointer: Not Supported 00:22:26.325 Oversized SGL: Not Supported 00:22:26.325 SGL Metadata Address: Not Supported 00:22:26.325 SGL Offset: Supported 00:22:26.325 Transport SGL Data Block: Not Supported 00:22:26.325 Replay Protected Memory Block: Not Supported 00:22:26.325 00:22:26.325 Firmware Slot Information 00:22:26.325 ========================= 00:22:26.325 Active slot: 1 00:22:26.325 Slot 1 Firmware Revision: 25.01 00:22:26.325 00:22:26.325 00:22:26.325 Commands Supported and Effects 00:22:26.325 ============================== 00:22:26.325 Admin Commands 00:22:26.325 -------------- 00:22:26.325 Get Log Page (02h): Supported 00:22:26.325 Identify (06h): Supported 00:22:26.325 Abort 
(08h): Supported 00:22:26.325 Set Features (09h): Supported 00:22:26.325 Get Features (0Ah): Supported 00:22:26.325 Asynchronous Event Request (0Ch): Supported 00:22:26.325 Keep Alive (18h): Supported 00:22:26.325 I/O Commands 00:22:26.325 ------------ 00:22:26.325 Flush (00h): Supported LBA-Change 00:22:26.325 Write (01h): Supported LBA-Change 00:22:26.325 Read (02h): Supported 00:22:26.325 Compare (05h): Supported 00:22:26.325 Write Zeroes (08h): Supported LBA-Change 00:22:26.325 Dataset Management (09h): Supported LBA-Change 00:22:26.325 Copy (19h): Supported LBA-Change 00:22:26.325 00:22:26.325 Error Log 00:22:26.325 ========= 00:22:26.325 00:22:26.325 Arbitration 00:22:26.325 =========== 00:22:26.325 Arbitration Burst: 1 00:22:26.325 00:22:26.325 Power Management 00:22:26.325 ================ 00:22:26.325 Number of Power States: 1 00:22:26.325 Current Power State: Power State #0 00:22:26.325 Power State #0: 00:22:26.325 Max Power: 0.00 W 00:22:26.325 Non-Operational State: Operational 00:22:26.325 Entry Latency: Not Reported 00:22:26.325 Exit Latency: Not Reported 00:22:26.325 Relative Read Throughput: 0 00:22:26.325 Relative Read Latency: 0 00:22:26.325 Relative Write Throughput: 0 00:22:26.325 Relative Write Latency: 0 00:22:26.325 Idle Power: Not Reported 00:22:26.325 Active Power: Not Reported 00:22:26.325 Non-Operational Permissive Mode: Not Supported 00:22:26.325 00:22:26.325 Health Information 00:22:26.325 ================== 00:22:26.325 Critical Warnings: 00:22:26.325 Available Spare Space: OK 00:22:26.325 Temperature: OK 00:22:26.325 Device Reliability: OK 00:22:26.325 Read Only: No 00:22:26.325 Volatile Memory Backup: OK 00:22:26.325 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:26.325 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:26.325 Available Spare: 0% 00:22:26.325 Available Spare Threshold: 0% 00:22:26.325 Life Percentage Used:[2024-12-10 04:59:17.188981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.325 
[2024-12-10 04:59:17.188986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb5a690) 00:22:26.325 [2024-12-10 04:59:17.188991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.325 [2024-12-10 04:59:17.189002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbcb80, cid 7, qid 0 00:22:26.325 [2024-12-10 04:59:17.189076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.325 [2024-12-10 04:59:17.189082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.325 [2024-12-10 04:59:17.189085] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.325 [2024-12-10 04:59:17.189088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbcb80) on tqpair=0xb5a690 00:22:26.325 [2024-12-10 04:59:17.189119] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:26.325 [2024-12-10 04:59:17.189128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc100) on tqpair=0xb5a690 00:22:26.325 [2024-12-10 04:59:17.189134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.325 [2024-12-10 04:59:17.189139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc280) on tqpair=0xb5a690 00:22:26.325 [2024-12-10 04:59:17.189143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.325 [2024-12-10 04:59:17.189147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc400) on tqpair=0xb5a690 00:22:26.325 [2024-12-10 04:59:17.189151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.325 
[2024-12-10 04:59:17.189155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc580) on tqpair=0xb5a690 00:22:26.325 [2024-12-10 04:59:17.189159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.325 [2024-12-10 04:59:17.189173] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.325 [2024-12-10 04:59:17.189177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.325 [2024-12-10 04:59:17.189180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5a690) 00:22:26.325 [2024-12-10 04:59:17.189185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.325 [2024-12-10 04:59:17.189202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc580, cid 3, qid 0 00:22:26.325 [2024-12-10 04:59:17.189253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.325 [2024-12-10 04:59:17.189258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.325 [2024-12-10 04:59:17.189261] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.325 [2024-12-10 04:59:17.189265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc580) on tqpair=0xb5a690 00:22:26.325 [2024-12-10 04:59:17.189270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.325 [2024-12-10 04:59:17.189273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.325 [2024-12-10 04:59:17.189276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5a690) 00:22:26.325 [2024-12-10 04:59:17.189282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.325 [2024-12-10 04:59:17.189293] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc580, cid 3, qid 0 00:22:26.325 [2024-12-10 04:59:17.189370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.325 [2024-12-10 04:59:17.189376] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.325 [2024-12-10 04:59:17.189379] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.325 [2024-12-10 04:59:17.189382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc580) on tqpair=0xb5a690 00:22:26.325 [2024-12-10 04:59:17.189386] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:26.325 [2024-12-10 04:59:17.189389] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:26.325 [2024-12-10 04:59:17.189398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.325 [2024-12-10 04:59:17.189401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.325 [2024-12-10 04:59:17.189404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5a690) 00:22:26.325 [2024-12-10 04:59:17.189410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.325 [2024-12-10 04:59:17.189419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc580, cid 3, qid 0 00:22:26.325 [2024-12-10 04:59:17.189477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.325 [2024-12-10 04:59:17.189482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.325 [2024-12-10 04:59:17.189485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.325 [2024-12-10 04:59:17.189490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc580) on tqpair=0xb5a690 00:22:26.325 [2024-12-10 04:59:17.189498] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.325 [2024-12-10 04:59:17.189502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.325 [2024-12-10 04:59:17.189505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5a690) 00:22:26.325 [2024-12-10 04:59:17.189510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.325 [2024-12-10 04:59:17.189519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc580, cid 3, qid 0 00:22:26.325 [2024-12-10 04:59:17.189578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.325 [2024-12-10 04:59:17.189584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.326 [2024-12-10 04:59:17.189587] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.326 [2024-12-10 04:59:17.189590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc580) on tqpair=0xb5a690 00:22:26.326 [2024-12-10 04:59:17.189599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.326 [2024-12-10 04:59:17.189602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.326 [2024-12-10 04:59:17.189605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5a690) 00:22:26.326 [2024-12-10 04:59:17.189611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.326 [2024-12-10 04:59:17.189620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc580, cid 3, qid 0 00:22:26.326 [2024-12-10 04:59:17.189680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.326 [2024-12-10 04:59:17.189685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.326 [2024-12-10 04:59:17.189688] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.326 [2024-12-10 04:59:17.189691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc580) on tqpair=0xb5a690 00:22:26.326 [2024-12-10 04:59:17.189699] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.326 [2024-12-10 04:59:17.189703] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.326 [2024-12-10 04:59:17.189706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5a690) 00:22:26.326 [2024-12-10 04:59:17.189711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.326 [2024-12-10 04:59:17.189720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc580, cid 3, qid 0 00:22:26.326 [2024-12-10 04:59:17.189789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.326 [2024-12-10 04:59:17.189795] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.326 [2024-12-10 04:59:17.189798] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.326 [2024-12-10 04:59:17.189801] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc580) on tqpair=0xb5a690 00:22:26.326 [2024-12-10 04:59:17.189809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.326 [2024-12-10 04:59:17.189813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.326 [2024-12-10 04:59:17.189815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5a690) 00:22:26.326 [2024-12-10 04:59:17.189821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.326 [2024-12-10 04:59:17.189830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc580, cid 3, qid 0 00:22:26.326 [2024-12-10 
04:59:17.189896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.326 [2024-12-10 04:59:17.189901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.326 [2024-12-10 04:59:17.189904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.326 [2024-12-10 04:59:17.189907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc580) on tqpair=0xb5a690 00:22:26.326 [2024-12-10 04:59:17.189917] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.326 [2024-12-10 04:59:17.189921] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.326 [2024-12-10 04:59:17.189924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5a690) 00:22:26.326 [2024-12-10 04:59:17.189929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.326 [2024-12-10 04:59:17.189938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc580, cid 3, qid 0 00:22:26.326 [2024-12-10 04:59:17.190002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:26.326 [2024-12-10 04:59:17.190007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:26.326 [2024-12-10 04:59:17.190010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:26.326 [2024-12-10 04:59:17.190013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc580) on tqpair=0xb5a690 00:22:26.326 [2024-12-10 04:59:17.190021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:26.326 [2024-12-10 04:59:17.190025] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:26.326 [2024-12-10 04:59:17.190028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5a690) 00:22:26.326 [2024-12-10 04:59:17.190033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.326 [2024-12-10 04:59:17.190042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc580, cid 3, qid 0
00:22:26.328 [2024-12-10 04:59:17.197178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:26.329 [2024-12-10 04:59:17.197186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:26.329 [2024-12-10 04:59:17.197189]
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:26.329 [2024-12-10 04:59:17.197192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc580) on tqpair=0xb5a690
00:22:26.329 [2024-12-10 04:59:17.197201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:26.329 [2024-12-10 04:59:17.197205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:26.329 [2024-12-10 04:59:17.197208] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5a690)
00:22:26.329 [2024-12-10 04:59:17.197214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.329 [2024-12-10 04:59:17.197225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbbc580, cid 3, qid 0
00:22:26.329 [2024-12-10 04:59:17.197296] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:26.329 [2024-12-10 04:59:17.197302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:26.329 [2024-12-10 04:59:17.197305] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:26.329 [2024-12-10 04:59:17.197308] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbbc580) on tqpair=0xb5a690
00:22:26.329 [2024-12-10 04:59:17.197317] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds
00:22:26.329 0%
00:22:26.329 Data Units Read: 0
00:22:26.329 Data Units Written: 0
00:22:26.329 Host Read Commands: 0
00:22:26.329 Host Write Commands: 0
00:22:26.329 Controller Busy Time: 0 minutes
00:22:26.329 Power Cycles: 0
00:22:26.329 Power On Hours: 0 hours
00:22:26.329 Unsafe Shutdowns: 0
00:22:26.329 Unrecoverable Media Errors: 0
00:22:26.329 Lifetime Error Log Entries: 0
00:22:26.329 Warning Temperature Time: 0 minutes
00:22:26.329 Critical Temperature Time: 0 minutes
00:22:26.329
00:22:26.329 Number of Queues
00:22:26.329 ================
00:22:26.329 Number of I/O Submission Queues: 127
00:22:26.329 Number of I/O Completion Queues: 127
00:22:26.329
00:22:26.329 Active Namespaces
00:22:26.329 =================
00:22:26.329 Namespace ID:1
00:22:26.329 Error Recovery Timeout: Unlimited
00:22:26.329 Command Set Identifier: NVM (00h)
00:22:26.329 Deallocate: Supported
00:22:26.329 Deallocated/Unwritten Error: Not Supported
00:22:26.329 Deallocated Read Value: Unknown
00:22:26.329 Deallocate in Write Zeroes: Not Supported
00:22:26.329 Deallocated Guard Field: 0xFFFF
00:22:26.329 Flush: Supported
00:22:26.329 Reservation: Supported
00:22:26.329 Namespace Sharing Capabilities: Multiple Controllers
00:22:26.329 Size (in LBAs): 131072 (0GiB)
00:22:26.329 Capacity (in LBAs): 131072 (0GiB)
00:22:26.329 Utilization (in LBAs): 131072 (0GiB)
00:22:26.329 NGUID: ABCDEF0123456789ABCDEF0123456789
00:22:26.329 EUI64: ABCDEF0123456789
00:22:26.329 UUID: 9c8e131e-dd6a-4e7a-aabc-e9cddb061128
00:22:26.329 Thin Provisioning: Not Supported
00:22:26.329 Per-NS Atomic Units: Yes
00:22:26.329 Atomic Boundary Size (Normal): 0
00:22:26.329 Atomic Boundary Size (PFail): 0
00:22:26.329 Atomic Boundary Offset: 0
00:22:26.329 Maximum Single Source Range Length: 65535
00:22:26.329 Maximum Copy Length: 65535
00:22:26.329 Maximum Source Range Count: 1
00:22:26.329 NGUID/EUI64 Never Reused: No
00:22:26.329 Namespace Write Protected: No
00:22:26.329 Number of LBA Formats: 1
00:22:26.329 Current LBA Format: LBA Format #00
00:22:26.329 LBA Format #00: Data Size: 512 Metadata Size: 0
00:22:26.329
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 706520 ']'
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 706520
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 706520 ']'
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 706520
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 706520 00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 706520' 00:22:26.329 killing process with pid 706520 00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 706520 00:22:26.329 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 706520 00:22:26.588 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:26.588 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:26.588 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:26.588 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:26.588 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:26.588 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:26.588 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:26.588 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:26.588 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:26.588 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.588 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.588 04:59:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.492 04:59:19 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:28.492 00:22:28.492 real 0m9.943s 00:22:28.492 user 0m7.898s 00:22:28.492 sys 0m4.867s 00:22:28.492 04:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:28.492 04:59:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:28.492 ************************************ 00:22:28.492 END TEST nvmf_identify 00:22:28.492 ************************************ 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.751 ************************************ 00:22:28.751 START TEST nvmf_perf 00:22:28.751 ************************************ 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:28.751 * Looking for test storage... 
00:22:28.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:28.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.751 --rc genhtml_branch_coverage=1 00:22:28.751 --rc genhtml_function_coverage=1 00:22:28.751 --rc genhtml_legend=1 00:22:28.751 --rc geninfo_all_blocks=1 00:22:28.751 --rc geninfo_unexecuted_blocks=1 00:22:28.751 00:22:28.751 ' 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:28.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:22:28.751 --rc genhtml_branch_coverage=1 00:22:28.751 --rc genhtml_function_coverage=1 00:22:28.751 --rc genhtml_legend=1 00:22:28.751 --rc geninfo_all_blocks=1 00:22:28.751 --rc geninfo_unexecuted_blocks=1 00:22:28.751 00:22:28.751 ' 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:28.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.751 --rc genhtml_branch_coverage=1 00:22:28.751 --rc genhtml_function_coverage=1 00:22:28.751 --rc genhtml_legend=1 00:22:28.751 --rc geninfo_all_blocks=1 00:22:28.751 --rc geninfo_unexecuted_blocks=1 00:22:28.751 00:22:28.751 ' 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:28.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.751 --rc genhtml_branch_coverage=1 00:22:28.751 --rc genhtml_function_coverage=1 00:22:28.751 --rc genhtml_legend=1 00:22:28.751 --rc geninfo_all_blocks=1 00:22:28.751 --rc geninfo_unexecuted_blocks=1 00:22:28.751 00:22:28.751 ' 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.751 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:29.010 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:29.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:29.010 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:29.010 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:29.010 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:29.010 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:29.010 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:29.010 04:59:19 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:29.010 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:29.010 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:29.010 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.010 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:29.010 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:29.010 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:29.010 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.010 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.010 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.010 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:29.010 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:29.011 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:29.011 04:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:34.408 04:59:25 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.408 
04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:34.408 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:34.408 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:34.408 Found net devices under 0000:af:00.0: cvl_0_0 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.408 04:59:25 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:34.408 Found net devices under 0000:af:00.1: cvl_0_1 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:34.408 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:34.409 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:34.409 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:34.409 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.409 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.409 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.409 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:34.409 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.409 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.409 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:34.409 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:34.409 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.409 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.409 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:34.409 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:34.409 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.409 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.668 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.668 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.668 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:34.668 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:34.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:34.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:22:34.928 00:22:34.928 --- 10.0.0.2 ping statistics --- 00:22:34.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.928 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:34.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:34.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:22:34.928 00:22:34.928 --- 10.0.0.1 ping statistics --- 00:22:34.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.928 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=710311 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 710311 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 710311 ']' 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:34.928 04:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:34.928 [2024-12-10 04:59:25.932423] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
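The `waitforlisten 710311` step above blocks until the freshly launched `nvmf_tgt` creates its RPC socket ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."). A minimal sketch of that polling pattern, assuming only what the trace shows (the `/var/tmp/spdk.sock` path and the `max_retries=100` default); the real helper in `autotest_common.sh` does more, such as checking the pid is still alive:

```shell
#!/bin/sh
# Poll until an SPDK-style RPC socket path appears, giving up after
# max_retries attempts (0.1s apart). Returns 0 on success, 1 on timeout.
wait_for_listen() {
  rpc_addr=${1:-/var/tmp/spdk.sock}
  max_retries=${2:-100}
  i=0
  while [ "$i" -lt "$max_retries" ]; do
    # A plain existence check stands in for the helper's socket test.
    if [ -e "$rpc_addr" ]; then
      return 0
    fi
    i=$((i + 1))
    sleep 0.1
  done
  return 1
}
```

In this log the equivalent call would be `wait_for_listen /var/tmp/spdk.sock` issued right after `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...` is backgrounded.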
00:22:34.928 [2024-12-10 04:59:25.932465] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.928 [2024-12-10 04:59:26.011957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.928 [2024-12-10 04:59:26.051039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.928 [2024-12-10 04:59:26.051078] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.928 [2024-12-10 04:59:26.051086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.928 [2024-12-10 04:59:26.051092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.928 [2024-12-10 04:59:26.051097] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:34.928 [2024-12-10 04:59:26.052582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.928 [2024-12-10 04:59:26.052691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.928 [2024-12-10 04:59:26.052755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:34.928 [2024-12-10 04:59:26.052756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.187 04:59:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.187 04:59:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:35.187 04:59:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:35.187 04:59:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:35.187 04:59:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:35.187 04:59:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.187 04:59:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:35.187 04:59:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:38.474 04:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:38.474 04:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:38.474 04:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:38.474 04:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:38.733 04:59:29 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:38.733 04:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:38.733 04:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:38.733 04:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:38.733 04:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:38.733 [2024-12-10 04:59:29.820250] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.733 04:59:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:38.992 04:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:38.992 04:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:39.251 04:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:39.251 04:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:39.510 04:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.510 [2024-12-10 04:59:30.628573] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.769 04:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:39.769 04:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:39.769 04:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:39.769 04:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:39.769 04:59:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:41.145 Initializing NVMe Controllers 00:22:41.145 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:41.145 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:41.145 Initialization complete. Launching workers. 00:22:41.145 ======================================================== 00:22:41.145 Latency(us) 00:22:41.145 Device Information : IOPS MiB/s Average min max 00:22:41.145 PCIE (0000:5e:00.0) NSID 1 from core 0: 98674.20 385.45 323.76 14.32 5478.52 00:22:41.145 ======================================================== 00:22:41.145 Total : 98674.20 385.45 323.76 14.32 5478.52 00:22:41.145 00:22:41.145 04:59:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:42.520 Initializing NVMe Controllers 00:22:42.520 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:42.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:42.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:42.520 Initialization complete. Launching workers. 
00:22:42.520 ======================================================== 00:22:42.520 Latency(us) 00:22:42.520 Device Information : IOPS MiB/s Average min max 00:22:42.520 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 86.00 0.34 11812.51 105.48 44759.40 00:22:42.520 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16634.70 7968.50 54865.61 00:22:42.520 ======================================================== 00:22:42.521 Total : 147.00 0.57 13813.55 105.48 54865.61 00:22:42.521 00:22:42.521 04:59:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:43.897 Initializing NVMe Controllers 00:22:43.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:43.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:43.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:43.897 Initialization complete. Launching workers. 
00:22:43.897 ======================================================== 00:22:43.897 Latency(us) 00:22:43.897 Device Information : IOPS MiB/s Average min max 00:22:43.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11273.20 44.04 2837.85 451.32 7841.97 00:22:43.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3869.65 15.12 8276.54 4969.31 16130.40 00:22:43.898 ======================================================== 00:22:43.898 Total : 15142.85 59.15 4227.67 451.32 16130.40 00:22:43.898 00:22:43.898 04:59:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:43.898 04:59:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:43.898 04:59:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:46.430 Initializing NVMe Controllers 00:22:46.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:46.430 Controller IO queue size 128, less than required. 00:22:46.430 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:46.430 Controller IO queue size 128, less than required. 00:22:46.430 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:46.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:46.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:46.430 Initialization complete. Launching workers. 
00:22:46.430 ======================================================== 00:22:46.430 Latency(us) 00:22:46.430 Device Information : IOPS MiB/s Average min max 00:22:46.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1808.96 452.24 71942.41 51875.49 125191.31 00:22:46.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 600.99 150.25 221831.88 85061.40 376314.93 00:22:46.430 ======================================================== 00:22:46.430 Total : 2409.95 602.49 109321.56 51875.49 376314.93 00:22:46.430 00:22:46.430 04:59:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:46.430 No valid NVMe controllers or AIO or URING devices found 00:22:46.430 Initializing NVMe Controllers 00:22:46.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:46.430 Controller IO queue size 128, less than required. 00:22:46.430 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:46.430 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:46.430 Controller IO queue size 128, less than required. 00:22:46.430 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:46.430 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:46.430 WARNING: Some requested NVMe devices were skipped 00:22:46.430 04:59:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:48.963 Initializing NVMe Controllers 00:22:48.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:48.963 Controller IO queue size 128, less than required. 00:22:48.963 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:48.963 Controller IO queue size 128, less than required. 00:22:48.963 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:48.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:48.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:48.963 Initialization complete. Launching workers. 
00:22:48.963 00:22:48.963 ==================== 00:22:48.963 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:48.963 TCP transport: 00:22:48.963 polls: 10748 00:22:48.963 idle_polls: 7294 00:22:48.963 sock_completions: 3454 00:22:48.963 nvme_completions: 6481 00:22:48.963 submitted_requests: 9640 00:22:48.963 queued_requests: 1 00:22:48.963 00:22:48.963 ==================== 00:22:48.963 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:48.963 TCP transport: 00:22:48.963 polls: 14626 00:22:48.963 idle_polls: 11190 00:22:48.963 sock_completions: 3436 00:22:48.963 nvme_completions: 6717 00:22:48.963 submitted_requests: 10060 00:22:48.963 queued_requests: 1 00:22:48.963 ======================================================== 00:22:48.963 Latency(us) 00:22:48.963 Device Information : IOPS MiB/s Average min max 00:22:48.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1619.68 404.92 81562.21 57712.76 146359.86 00:22:48.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1678.67 419.67 76813.75 46653.11 109253.53 00:22:48.963 ======================================================== 00:22:48.963 Total : 3298.35 824.59 79145.52 46653.11 146359.86 00:22:48.963 00:22:48.963 04:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:48.963 04:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:49.223 rmmod nvme_tcp 00:22:49.223 rmmod nvme_fabrics 00:22:49.223 rmmod nvme_keyring 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 710311 ']' 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 710311 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 710311 ']' 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 710311 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 710311 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 710311' 00:22:49.223 killing process with pid 710311 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # 
kill 710311 00:22:49.223 04:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 710311 00:22:51.127 04:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:51.127 04:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:51.127 04:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:51.127 04:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:51.127 04:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:51.127 04:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:51.127 04:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:51.127 04:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:51.127 04:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:51.127 04:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.127 04:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.127 04:59:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.032 04:59:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:53.032 00:22:53.032 real 0m24.146s 00:22:53.032 user 1m2.433s 00:22:53.032 sys 0m8.210s 00:22:53.032 04:59:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:53.032 04:59:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:53.032 ************************************ 00:22:53.032 END TEST nvmf_perf 00:22:53.032 ************************************ 00:22:53.032 04:59:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:53.032 04:59:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:53.032 04:59:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:53.032 04:59:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.032 ************************************ 00:22:53.032 START TEST nvmf_fio_host 00:22:53.032 ************************************ 00:22:53.032 04:59:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:53.032 * Looking for test storage... 00:22:53.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:53.032 04:59:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:53.032 04:59:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:53.032 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:53.033 04:59:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:53.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.033 --rc genhtml_branch_coverage=1 00:22:53.033 --rc genhtml_function_coverage=1 00:22:53.033 --rc genhtml_legend=1 00:22:53.033 --rc geninfo_all_blocks=1 00:22:53.033 --rc geninfo_unexecuted_blocks=1 00:22:53.033 00:22:53.033 ' 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:53.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.033 --rc genhtml_branch_coverage=1 00:22:53.033 --rc genhtml_function_coverage=1 00:22:53.033 --rc genhtml_legend=1 00:22:53.033 --rc geninfo_all_blocks=1 00:22:53.033 --rc geninfo_unexecuted_blocks=1 00:22:53.033 00:22:53.033 ' 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:53.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.033 --rc genhtml_branch_coverage=1 00:22:53.033 --rc genhtml_function_coverage=1 00:22:53.033 --rc genhtml_legend=1 00:22:53.033 --rc geninfo_all_blocks=1 00:22:53.033 --rc geninfo_unexecuted_blocks=1 00:22:53.033 00:22:53.033 ' 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:53.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.033 --rc genhtml_branch_coverage=1 00:22:53.033 --rc genhtml_function_coverage=1 00:22:53.033 --rc genhtml_legend=1 00:22:53.033 --rc geninfo_all_blocks=1 00:22:53.033 --rc geninfo_unexecuted_blocks=1 00:22:53.033 00:22:53.033 ' 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:53.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:53.033 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:53.034 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:53.034 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:53.034 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:53.034 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.034 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:53.034 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:53.034 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:53.034 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.034 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.034 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.034 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:53.034 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:53.034 04:59:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:53.034 04:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:22:59.605 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:59.605 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.605 04:59:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:59.605 Found net devices under 0000:af:00.0: cvl_0_0 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:59.605 Found net devices under 0000:af:00.1: cvl_0_1 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:59.605 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.606 04:59:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:59.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:22:59.606 00:22:59.606 --- 10.0.0.2 ping statistics --- 00:22:59.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.606 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:22:59.606 04:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:59.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:22:59.606 00:22:59.606 --- 10.0.0.1 ping statistics --- 00:22:59.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.606 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=716294 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 716294 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 716294 ']' 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.606 [2024-12-10 04:59:50.103054] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:22:59.606 [2024-12-10 04:59:50.103107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.606 [2024-12-10 04:59:50.184348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:59.606 [2024-12-10 04:59:50.223880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.606 [2024-12-10 04:59:50.223918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:59.606 [2024-12-10 04:59:50.223926] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.606 [2024-12-10 04:59:50.223932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.606 [2024-12-10 04:59:50.223937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:59.606 [2024-12-10 04:59:50.225388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.606 [2024-12-10 04:59:50.225499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.606 [2024-12-10 04:59:50.225602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.606 [2024-12-10 04:59:50.225604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:59.606 [2024-12-10 04:59:50.503639] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.606 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:59.866 Malloc1 00:22:59.866 04:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:00.124 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:00.124 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:00.383 [2024-12-10 04:59:51.377916] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.383 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:00.642 04:59:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:00.642 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:00.643 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:00.643 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:00.643 04:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:00.902 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:00.902 fio-3.35 00:23:00.902 Starting 1 thread 00:23:03.434 00:23:03.434 test: (groupid=0, jobs=1): err= 0: pid=716669: Tue Dec 10 04:59:54 2024 00:23:03.434 read: IOPS=12.0k, BW=46.8MiB/s (49.0MB/s)(93.8MiB/2005msec) 00:23:03.434 slat (nsec): min=1531, max=237658, avg=1733.75, stdev=2169.29 00:23:03.434 clat (usec): min=3203, max=10540, avg=5900.50, stdev=456.75 00:23:03.434 lat (usec): min=3234, max=10542, avg=5902.23, stdev=456.69 00:23:03.434 clat percentiles (usec): 00:23:03.434 | 1.00th=[ 4817], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5538], 00:23:03.434 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 5997], 00:23:03.434 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:23:03.434 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 9372], 99.95th=[ 9896], 00:23:03.434 | 99.99th=[10552] 00:23:03.434 bw ( KiB/s): min=46992, max=48408, per=99.97%, avg=47884.00, stdev=645.33, samples=4 00:23:03.434 iops : min=11748, max=12102, avg=11971.00, stdev=161.33, samples=4 00:23:03.434 write: IOPS=11.9k, BW=46.6MiB/s (48.8MB/s)(93.4MiB/2005msec); 0 zone resets 00:23:03.434 slat (nsec): min=1567, max=238136, avg=1799.15, stdev=1692.12 00:23:03.434 clat (usec): min=2457, max=9352, avg=4771.57, stdev=359.51 00:23:03.434 lat (usec): min=2472, max=9354, avg=4773.37, stdev=359.51 00:23:03.434 clat percentiles (usec): 00:23:03.434 | 1.00th=[ 3916], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:23:03.434 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4883], 
00:23:03.434 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:23:03.434 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 6915], 99.95th=[ 7767], 00:23:03.434 | 99.99th=[ 9241] 00:23:03.434 bw ( KiB/s): min=47296, max=48128, per=99.99%, avg=47700.00, stdev=353.78, samples=4 00:23:03.434 iops : min=11824, max=12032, avg=11925.00, stdev=88.45, samples=4 00:23:03.434 lat (msec) : 4=0.80%, 10=99.18%, 20=0.02% 00:23:03.434 cpu : usr=74.65%, sys=24.40%, ctx=99, majf=0, minf=2 00:23:03.434 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:03.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:03.434 issued rwts: total=24008,23911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.434 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:03.434 00:23:03.434 Run status group 0 (all jobs): 00:23:03.434 READ: bw=46.8MiB/s (49.0MB/s), 46.8MiB/s-46.8MiB/s (49.0MB/s-49.0MB/s), io=93.8MiB (98.3MB), run=2005-2005msec 00:23:03.434 WRITE: bw=46.6MiB/s (48.8MB/s), 46.6MiB/s-46.6MiB/s (48.8MB/s-48.8MB/s), io=93.4MiB (97.9MB), run=2005-2005msec 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:03.434 04:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:03.434 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:03.434 fio-3.35 00:23:03.434 Starting 1 thread 00:23:05.967 00:23:05.967 test: (groupid=0, jobs=1): err= 0: pid=717225: Tue Dec 10 04:59:56 2024 00:23:05.967 read: IOPS=11.0k, BW=172MiB/s (180MB/s)(345MiB/2007msec) 00:23:05.967 slat (nsec): min=2504, max=89000, avg=2866.40, stdev=1242.74 00:23:05.967 clat (usec): min=1642, max=13012, avg=6706.97, stdev=1669.42 00:23:05.967 lat (usec): min=1645, max=13026, avg=6709.84, stdev=1669.55 00:23:05.967 clat percentiles (usec): 00:23:05.967 | 1.00th=[ 3425], 5.00th=[ 4178], 10.00th=[ 4621], 20.00th=[ 5211], 00:23:05.967 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7111], 00:23:05.967 | 70.00th=[ 7504], 80.00th=[ 8160], 90.00th=[ 8717], 95.00th=[ 9634], 00:23:05.967 | 99.00th=[11207], 99.50th=[11731], 99.90th=[12518], 99.95th=[12649], 00:23:05.967 | 99.99th=[13042] 00:23:05.967 bw ( KiB/s): min=80288, max=95360, per=50.55%, avg=89016.00, stdev=6711.33, samples=4 00:23:05.967 iops : min= 5018, max= 5960, avg=5563.50, stdev=419.46, samples=4 00:23:05.967 write: IOPS=6480, BW=101MiB/s (106MB/s)(182MiB/1799msec); 0 zone resets 00:23:05.967 slat (usec): min=29, max=404, avg=31.92, stdev= 7.26 00:23:05.967 clat (usec): min=2951, max=15396, avg=8568.61, stdev=1472.17 00:23:05.967 lat (usec): min=2980, max=15507, avg=8600.53, stdev=1473.65 00:23:05.967 clat percentiles (usec): 00:23:05.967 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 6915], 
20.00th=[ 7373], 00:23:05.967 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:23:05.967 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11338], 00:23:05.967 | 99.00th=[12387], 99.50th=[12911], 99.90th=[14746], 99.95th=[15139], 00:23:05.967 | 99.99th=[15270] 00:23:05.967 bw ( KiB/s): min=84960, max=99200, per=89.47%, avg=92776.00, stdev=6322.32, samples=4 00:23:05.967 iops : min= 5310, max= 6200, avg=5798.50, stdev=395.15, samples=4 00:23:05.967 lat (msec) : 2=0.04%, 4=2.35%, 10=89.51%, 20=8.10% 00:23:05.967 cpu : usr=86.29%, sys=12.81%, ctx=62, majf=0, minf=2 00:23:05.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:05.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:05.967 issued rwts: total=22087,11659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:05.967 00:23:05.967 Run status group 0 (all jobs): 00:23:05.967 READ: bw=172MiB/s (180MB/s), 172MiB/s-172MiB/s (180MB/s-180MB/s), io=345MiB (362MB), run=2007-2007msec 00:23:05.967 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=182MiB (191MB), run=1799-1799msec 00:23:05.967 04:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:05.967 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:05.967 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:05.967 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:05.967 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:05.967 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
00:23:05.967 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:05.967 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:05.967 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:05.967 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:05.967 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:05.967 rmmod nvme_tcp 00:23:06.226 rmmod nvme_fabrics 00:23:06.226 rmmod nvme_keyring 00:23:06.226 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:06.226 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:06.226 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:06.226 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 716294 ']' 00:23:06.226 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 716294 00:23:06.226 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 716294 ']' 00:23:06.226 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 716294 00:23:06.226 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:06.226 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.226 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 716294 00:23:06.226 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:06.226 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:06.226 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 716294' 
00:23:06.226 killing process with pid 716294 00:23:06.226 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 716294 00:23:06.226 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 716294 00:23:06.485 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:06.485 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:06.485 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:06.485 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:06.485 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:06.485 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:06.485 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:06.485 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:06.485 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:06.485 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.485 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.485 04:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.389 04:59:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:08.389 00:23:08.389 real 0m15.571s 00:23:08.389 user 0m46.350s 00:23:08.389 sys 0m6.350s 00:23:08.389 04:59:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:08.389 04:59:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.389 ************************************ 
00:23:08.389 END TEST nvmf_fio_host 00:23:08.389 ************************************ 00:23:08.389 04:59:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:08.389 04:59:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:08.390 04:59:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:08.390 04:59:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.649 ************************************ 00:23:08.649 START TEST nvmf_failover 00:23:08.649 ************************************ 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:08.649 * Looking for test storage... 00:23:08.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:08.649 04:59:59 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:08.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.649 --rc genhtml_branch_coverage=1 00:23:08.649 --rc genhtml_function_coverage=1 00:23:08.649 --rc genhtml_legend=1 00:23:08.649 --rc geninfo_all_blocks=1 00:23:08.649 --rc geninfo_unexecuted_blocks=1 00:23:08.649 00:23:08.649 ' 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:08.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.649 --rc genhtml_branch_coverage=1 00:23:08.649 --rc genhtml_function_coverage=1 00:23:08.649 --rc genhtml_legend=1 00:23:08.649 --rc geninfo_all_blocks=1 00:23:08.649 --rc geninfo_unexecuted_blocks=1 00:23:08.649 00:23:08.649 ' 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:08.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.649 --rc genhtml_branch_coverage=1 00:23:08.649 --rc genhtml_function_coverage=1 00:23:08.649 --rc genhtml_legend=1 00:23:08.649 --rc geninfo_all_blocks=1 00:23:08.649 --rc geninfo_unexecuted_blocks=1 00:23:08.649 00:23:08.649 ' 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:08.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.649 --rc genhtml_branch_coverage=1 00:23:08.649 --rc genhtml_function_coverage=1 00:23:08.649 --rc genhtml_legend=1 00:23:08.649 --rc 
geninfo_all_blocks=1 00:23:08.649 --rc geninfo_unexecuted_blocks=1 00:23:08.649 00:23:08.649 ' 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:08.649 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:08.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:08.650 04:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:15.214 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.215 05:00:05 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:15.215 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:15.215 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:15.215 Found net devices under 0000:af:00.0: cvl_0_0 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:15.215 Found net devices under 0000:af:00.1: cvl_0_1 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:15.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:23:15.215 00:23:15.215 --- 10.0.0.2 ping statistics --- 00:23:15.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.215 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:23:15.215 00:23:15.215 --- 10.0.0.1 ping statistics --- 00:23:15.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.215 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:15.215 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=721253 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 721253 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 721253 ']' 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:15.216 [2024-12-10 05:00:05.711766] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:23:15.216 [2024-12-10 05:00:05.711818] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.216 [2024-12-10 05:00:05.772718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:15.216 [2024-12-10 05:00:05.815338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.216 [2024-12-10 05:00:05.815372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.216 [2024-12-10 05:00:05.815379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.216 [2024-12-10 05:00:05.815385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:15.216 [2024-12-10 05:00:05.815393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.216 [2024-12-10 05:00:05.816717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.216 [2024-12-10 05:00:05.816827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.216 [2024-12-10 05:00:05.816828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.216 05:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:15.216 [2024-12-10 05:00:06.122194] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.216 05:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:15.475 Malloc0 00:23:15.475 05:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:15.733 05:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:15.733 05:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:15.992 [2024-12-10 05:00:06.990881] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.992 05:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:16.250 [2024-12-10 05:00:07.199487] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:16.250 05:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:16.509 [2024-12-10 05:00:07.404179] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:16.509 05:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:16.509 05:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=721807 00:23:16.509 05:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:16.509 05:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 721807 /var/tmp/bdevperf.sock 00:23:16.509 05:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 721807 ']' 00:23:16.509 05:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.509 05:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.509 05:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.509 05:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.509 05:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:16.768 05:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.768 05:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:16.768 05:00:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:17.026 NVMe0n1 00:23:17.026 05:00:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:17.594 00:23:17.594 05:00:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=722125 00:23:17.594 05:00:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:17.594 05:00:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
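The two `bdev_nvme_attach_controller` calls above (ports 4420 and 4421) are what give bdevperf its second path. As a minimal sketch — assuming `rpc.py` is run from the SPDK tree and a bdevperf instance already owns `/var/tmp/bdevperf.sock` — the sequence reduces to attaching the same controller name once per portal (printed here rather than executed, since it needs a live target):

```shell
#!/usr/bin/env bash
# Sketch of the multipath attach performed in the log above.
# Assumptions: rpc.py relative path and socket location taken from the
# log; the commands are echoed, not run, so no SPDK target is required.
RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
NQN="nqn.2016-06.io.spdk:cnode1"
# One attach per target port; reusing -b NVMe0 adds a path to the same
# controller instead of creating a new bdev.
for port in 4420 4421; do
  echo "$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN -x failover"
done
```

Here `-x failover` selects the failover multipath policy, so when a listener is removed later in the test, I/O continues on the surviving path.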
00:23:18.533 05:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:18.533 [2024-12-10 05:00:09.657562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19003e0 is same with the state(6) to be set 00:23:18.792 05:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:22.079 05:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:22.079 00:23:22.079 05:00:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:22.079 [2024-12-10 05:00:13.169247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901170 is same with the state(6) to be set 00:23:22.079 05:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:25.367 05:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-12-10 05:00:16.384847] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: 
*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.367 05:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:26.303 05:00:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:26.561 [2024-12-10 05:00:17.616789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d2b0 is same with the state(6) to be set 00:23:26.562 05:00:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 722125 00:23:33.135 { 00:23:33.135 "results": [ 00:23:33.135 { 00:23:33.135 "job": "NVMe0n1", 00:23:33.135 "core_mask": "0x1", 00:23:33.135 "workload": "verify", 00:23:33.135 "status": "finished", 00:23:33.135 "verify_range": { 00:23:33.135 "start": 0, 00:23:33.135 "length": 16384 00:23:33.135 }, 00:23:33.135 "queue_depth": 128, 00:23:33.135 "io_size": 4096, 00:23:33.135 "runtime": 15.004976, 00:23:33.135 "iops": 11301.650865686157, 00:23:33.135 "mibps": 44.14707369408655, 00:23:33.135 "io_failed": 7573, 00:23:33.135 "io_timeout": 0, 00:23:33.135 "avg_latency_us": 10818.821662336293, 00:23:33.135 "min_latency_us": 421.30285714285714, 
00:23:33.135 "max_latency_us": 21470.841904761906 00:23:33.135 } 00:23:33.135 ], 00:23:33.135 "core_count": 1 00:23:33.135 } 00:23:33.135 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 721807 00:23:33.135 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 721807 ']' 00:23:33.135 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 721807 00:23:33.135 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:33.135 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.135 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 721807 00:23:33.135 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:33.135 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:33.135 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 721807' 00:23:33.135 killing process with pid 721807 00:23:33.135 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 721807 00:23:33.135 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 721807 00:23:33.135 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:33.135 [2024-12-10 05:00:07.465618] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:23:33.135 [2024-12-10 05:00:07.465672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid721807 ] 00:23:33.135 [2024-12-10 05:00:07.540632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.135 [2024-12-10 05:00:07.580633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.135 Running I/O for 15 seconds... 00:23:33.135 11369.00 IOPS, 44.41 MiB/s [2024-12-10T04:00:24.272Z] [2024-12-10 05:00:09.658110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.135 [2024-12-10 05:00:09.658145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.135 [2024-12-10 05:00:09.658161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.135 [2024-12-10 05:00:09.658175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.135 [2024-12-10 05:00:09.658184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.135 [2024-12-10 05:00:09.658191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.135 [2024-12-10 05:00:09.658200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:33.136 [2024-12-10 05:00:09.658215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 
lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 
[2024-12-10 05:00:09.658469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.136 [2024-12-10 05:00:09.658577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.136 [2024-12-10 05:00:09.658595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.136 [2024-12-10 05:00:09.658610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.136 [2024-12-10 05:00:09.658626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.136 [2024-12-10 05:00:09.658642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 
lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.136 [2024-12-10 05:00:09.658659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.136 [2024-12-10 05:00:09.658675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.136 [2024-12-10 05:00:09.658689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.136 [2024-12-10 05:00:09.658704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.136 [2024-12-10 05:00:09.658720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.136 [2024-12-10 05:00:09.658734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 
[2024-12-10 05:00:09.658742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.136 [2024-12-10 05:00:09.658748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.136 [2024-12-10 05:00:09.658762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.136 [2024-12-10 05:00:09.658776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.136 [2024-12-10 05:00:09.658792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.136 [2024-12-10 05:00:09.658807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.136 [2024-12-10 05:00:09.658815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.658821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.658829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.658835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.658842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.658849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.658857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.658863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.658872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.658878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.658886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.658894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.658903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 
lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.658909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.658918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.658925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.658933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.658940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.658948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.658954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.658962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.658969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.658978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.658984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 
[2024-12-10 05:00:09.658992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.658998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 
[2024-12-10 05:00:09.659248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.137 [2024-12-10 05:00:09.659404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.137 [2024-12-10 05:00:09.659412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:33.137 [2024-12-10 05:00:09.659419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ / ABORTED - SQ DELETION (00/08) entry pairs for lba 99648 through 99952 elided ...]
[2024-12-10 05:00:09.659994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:33.138 [2024-12-10 05:00:09.660001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated WRITE / ABORTED - SQ DELETION (00/08) entry pairs for lba 100192 through 100208 elided ...]
00:23:33.139 [2024-12-10 05:00:09.660064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:33.139 [2024-12-10 05:00:09.660071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:33.139 [2024-12-10 05:00:09.660076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100216 len:8 PRP1 0x0 PRP2 0x0
00:23:33.139 [2024-12-10 05:00:09.660084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:33.139 [2024-12-10 05:00:09.660127] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:33.139 [2024-12-10 05:00:09.660148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:33.139 [2024-12-10 05:00:09.660156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated ASYNC EVENT REQUEST / ABORTED - SQ DELETION (00/08) entry pairs for cid 1 through 3 elided ...]
00:23:33.139 [2024-12-10 05:00:09.660207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:33.139 [2024-12-10 05:00:09.663010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:33.139 [2024-12-10 05:00:09.663038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb35570 (9): Bad file descriptor
00:23:33.139 [2024-12-10 05:00:09.724979] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:23:33.139 11082.00 IOPS, 43.29 MiB/s [2024-12-10T04:00:24.276Z] 11214.67 IOPS, 43.81 MiB/s [2024-12-10T04:00:24.276Z] 11308.75 IOPS, 44.17 MiB/s [2024-12-10T04:00:24.276Z]
[2024-12-10 05:00:13.170950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:48064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:33.139 [2024-12-10 05:00:13.170984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ / ABORTED - SQ DELETION (00/08) entry pairs for lba 48072 through 48368 elided ...]
00:23:33.140 [2024-12-10 05:00:13.171576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:33.140 [2024-12-10 05:00:13.171584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated WRITE / ABORTED - SQ DELETION (00/08) entry pairs for lba 48392 through 48544 elided ...]
00:23:33.140 [2024-12-10 05:00:13.171883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:33.140 [2024-12-10 05:00:13.171889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.140 [2024-12-10 05:00:13.171897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.140 [2024-12-10 05:00:13.171904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.140 [2024-12-10 05:00:13.171913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.140 [2024-12-10 05:00:13.171919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.140 [2024-12-10 05:00:13.171927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.171934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.171944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.171952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.171962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.171969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.171977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.171983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.171992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:48608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:48616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 
05:00:13.172071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172151] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:48704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 
[2024-12-10 05:00:13.172494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.141 [2024-12-10 05:00:13.172539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.141 [2024-12-10 05:00:13.172565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48904 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.172594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48912 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.172617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48920 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.172641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48928 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.172663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48936 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.172688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48944 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.172711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48952 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.172741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48960 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.172772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48968 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.172796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48976 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.172818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48984 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.172841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48992 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.172864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49000 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.172888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49008 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.172911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49016 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 
[2024-12-10 05:00:13.172934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49024 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.172959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49032 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.172982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.172987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49040 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.172993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.172999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.173005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.173010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:49048 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.173016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.173022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.173027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.173032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49056 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.173038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.173044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.173051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.173058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49064 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.173065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.173071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.173076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.173081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49072 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.173087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.173093] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.183862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.183875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49080 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.183883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.183890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.142 [2024-12-10 05:00:13.183895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.142 [2024-12-10 05:00:13.183901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48376 len:8 PRP1 0x0 PRP2 0x0 00:23:33.142 [2024-12-10 05:00:13.183910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.142 [2024-12-10 05:00:13.183951] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:33.143 [2024-12-10 05:00:13.183975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.143 [2024-12-10 05:00:13.183983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:13.183991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.143 [2024-12-10 05:00:13.183997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:33.143 [2024-12-10 05:00:13.184004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.143 [2024-12-10 05:00:13.184012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:13.184019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.143 [2024-12-10 05:00:13.184026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:13.184033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:33.143 [2024-12-10 05:00:13.184054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb35570 (9): Bad file descriptor 00:23:33.143 [2024-12-10 05:00:13.186908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:33.143 [2024-12-10 05:00:13.249109] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:23:33.143 11178.60 IOPS, 43.67 MiB/s [2024-12-10T04:00:24.280Z] 11205.17 IOPS, 43.77 MiB/s [2024-12-10T04:00:24.280Z] 11226.57 IOPS, 43.85 MiB/s [2024-12-10T04:00:24.280Z] 11248.50 IOPS, 43.94 MiB/s [2024-12-10T04:00:24.280Z] 11266.67 IOPS, 44.01 MiB/s [2024-12-10T04:00:24.280Z] [2024-12-10 05:00:17.615857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.143 [2024-12-10 05:00:17.615895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.615904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.143 [2024-12-10 05:00:17.615911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.615919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.143 [2024-12-10 05:00:17.615926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.615934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.143 [2024-12-10 05:00:17.615946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.615953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb35570 is same with the state(6) to be set 00:23:33.143 [2024-12-10 05:00:17.618608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:33.143 [2024-12-10 05:00:17.618630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.143 [2024-12-10 05:00:17.618652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.143 [2024-12-10 05:00:17.618667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.143 [2024-12-10 05:00:17.618683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.143 [2024-12-10 05:00:17.618698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.143 [2024-12-10 05:00:17.618713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.143 [2024-12-10 05:00:17.618728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.143 [2024-12-10 05:00:17.618744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.143 [2024-12-10 05:00:17.618760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.143 [2024-12-10 05:00:17.618775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.143 [2024-12-10 05:00:17.618791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.143 [2024-12-10 05:00:17.618809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.143 [2024-12-10 05:00:17.618825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.143 [2024-12-10 05:00:17.618840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.143 [2024-12-10 05:00:17.618856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.143 [2024-12-10 05:00:17.618870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.143 [2024-12-10 05:00:17.618886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.143 
[2024-12-10 05:00:17.618906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.143 [2024-12-10 05:00:17.618920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.143 [2024-12-10 05:00:17.618935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.143 [2024-12-10 05:00:17.618951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.143 [2024-12-10 05:00:17.618965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.143 [2024-12-10 05:00:17.618979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.618987] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.143 [2024-12-10 05:00:17.618995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.619003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.143 [2024-12-10 05:00:17.619011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.619019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.143 [2024-12-10 05:00:17.619026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.619033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.143 [2024-12-10 05:00:17.619040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.619048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.143 [2024-12-10 05:00:17.619055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.619063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.143 [2024-12-10 05:00:17.619070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:33.143 [2024-12-10 05:00:17.619078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.143 [2024-12-10 05:00:17.619084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 
05:00:17.619337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619421] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77216 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.144 [2024-12-10 05:00:17.619626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.144 [2024-12-10 05:00:17.619633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 
[2024-12-10 05:00:17.619765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619848] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.619986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.619994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.620000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.620009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.620015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.620024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.620030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.620038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.620044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.620052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.145 [2024-12-10 05:00:17.620059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.620077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.145 [2024-12-10 05:00:17.620084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77520 len:8 PRP1 0x0 PRP2 0x0 00:23:33.145 [2024-12-10 05:00:17.620098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.620108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.145 [2024-12-10 05:00:17.620113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.145 [2024-12-10 05:00:17.620119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:77528 len:8 PRP1 0x0 PRP2 0x0 00:23:33.145 [2024-12-10 05:00:17.620125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.620132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.145 [2024-12-10 05:00:17.620137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.145 [2024-12-10 05:00:17.620142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77536 len:8 PRP1 0x0 PRP2 0x0 00:23:33.145 [2024-12-10 05:00:17.620148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.620155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.145 [2024-12-10 05:00:17.620160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.145 [2024-12-10 05:00:17.620172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77544 len:8 PRP1 0x0 PRP2 0x0 00:23:33.145 [2024-12-10 05:00:17.620179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.620185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.145 [2024-12-10 05:00:17.620190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.145 [2024-12-10 05:00:17.620196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77552 len:8 PRP1 0x0 PRP2 0x0 00:23:33.145 [2024-12-10 05:00:17.620202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 
05:00:17.620208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.145 [2024-12-10 05:00:17.620213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.145 [2024-12-10 05:00:17.620218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77560 len:8 PRP1 0x0 PRP2 0x0 00:23:33.145 [2024-12-10 05:00:17.620225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.620231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.145 [2024-12-10 05:00:17.620236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.145 [2024-12-10 05:00:17.620241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77568 len:8 PRP1 0x0 PRP2 0x0 00:23:33.145 [2024-12-10 05:00:17.620247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.145 [2024-12-10 05:00:17.620253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.620258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.620263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77576 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.620270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.620276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.620281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 
[2024-12-10 05:00:17.620286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77584 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.620294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.620300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.620305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.620310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77592 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.620316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.620323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.620328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.620333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77600 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.620339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.620347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.620352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.620359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77608 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.620366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.620374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.620378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.620384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77616 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.620390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.620396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.620401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.620406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77624 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.620412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.620419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.620425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.620430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77632 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.620437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.620443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.620447] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.620452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77640 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.620458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.620465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.620470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.620476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77648 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.620483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.620490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.620495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.620500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77656 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.620506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.620513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.620518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.620523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77664 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 
[2024-12-10 05:00:17.620531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.620538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.620543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.620549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77672 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.620555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.620561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.620566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.620571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77680 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.620578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.620584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.620589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.620594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77688 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.620600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.620607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.620611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.631318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77696 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.631329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.631337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.631343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.631348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77704 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.631355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.631361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.631366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.631372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77712 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.631381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.631388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.631393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.631398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77720 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.631406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.631413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.631418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.631426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77728 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.631432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.631438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.631444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.631451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77736 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.631459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.631465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.631470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.631476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77744 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.631483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.631490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.631495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.631500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77752 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.631506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.631514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.146 [2024-12-10 05:00:17.631519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.146 [2024-12-10 05:00:17.631525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77760 len:8 PRP1 0x0 PRP2 0x0 00:23:33.146 [2024-12-10 05:00:17.631531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.146 [2024-12-10 05:00:17.631538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.147 [2024-12-10 05:00:17.631544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.147 [2024-12-10 05:00:17.631549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77768 len:8 PRP1 0x0 PRP2 0x0 00:23:33.147 [2024-12-10 05:00:17.631556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.147 [2024-12-10 05:00:17.631562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.147 [2024-12-10 05:00:17.631567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:33.147 [2024-12-10 05:00:17.631572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77776 len:8 PRP1 0x0 PRP2 0x0 00:23:33.147 [2024-12-10 05:00:17.631579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.147 [2024-12-10 05:00:17.631622] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:33.147 [2024-12-10 05:00:17.631632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:33.147 [2024-12-10 05:00:17.631663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb35570 (9): Bad file descriptor 00:23:33.147 [2024-12-10 05:00:17.635032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:33.147 [2024-12-10 05:00:17.662068] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:23:33.147 11208.10 IOPS, 43.78 MiB/s [2024-12-10T04:00:24.284Z] 11244.27 IOPS, 43.92 MiB/s [2024-12-10T04:00:24.284Z] 11260.08 IOPS, 43.98 MiB/s [2024-12-10T04:00:24.284Z] 11275.92 IOPS, 44.05 MiB/s [2024-12-10T04:00:24.284Z] 11295.93 IOPS, 44.12 MiB/s 00:23:33.147 Latency(us) 00:23:33.147 [2024-12-10T04:00:24.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.147 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:33.147 Verification LBA range: start 0x0 length 0x4000 00:23:33.147 NVMe0n1 : 15.00 11301.65 44.15 504.70 0.00 10818.82 421.30 21470.84 00:23:33.147 [2024-12-10T04:00:24.284Z] =================================================================================================================== 00:23:33.147 [2024-12-10T04:00:24.284Z] Total : 11301.65 44.15 504.70 0.00 10818.82 421.30 21470.84 00:23:33.147 Received shutdown signal, test time was about 15.000000 seconds 00:23:33.147 00:23:33.147 Latency(us) 00:23:33.147 [2024-12-10T04:00:24.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.147 [2024-12-10T04:00:24.284Z] =================================================================================================================== 00:23:33.147 [2024-12-10T04:00:24.284Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.147 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:33.147 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:33.147 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:33.147 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=724576 00:23:33.147 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:33.147 
05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 724576 /var/tmp/bdevperf.sock 00:23:33.147 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 724576 ']' 00:23:33.147 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.147 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.147 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.147 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.147 05:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:33.147 05:00:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.147 05:00:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:33.147 05:00:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:33.406 [2024-12-10 05:00:24.277300] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:33.406 05:00:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:33.406 [2024-12-10 05:00:24.489917] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:33.406 05:00:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:33.973 NVMe0n1 00:23:33.973 05:00:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:34.539 00:23:34.539 05:00:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:34.906 00:23:34.906 05:00:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:34.906 05:00:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:34.906 05:00:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:35.214 05:00:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:38.501 05:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:38.501 05:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:38.501 05:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:23:38.501 05:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=725479 00:23:38.501 05:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 725479 00:23:39.437 { 00:23:39.437 "results": [ 00:23:39.437 { 00:23:39.437 "job": "NVMe0n1", 00:23:39.437 "core_mask": "0x1", 00:23:39.437 "workload": "verify", 00:23:39.437 "status": "finished", 00:23:39.437 "verify_range": { 00:23:39.437 "start": 0, 00:23:39.437 "length": 16384 00:23:39.437 }, 00:23:39.437 "queue_depth": 128, 00:23:39.437 "io_size": 4096, 00:23:39.437 "runtime": 1.006387, 00:23:39.437 "iops": 11277.967620805912, 00:23:39.437 "mibps": 44.05456101877309, 00:23:39.437 "io_failed": 0, 00:23:39.437 "io_timeout": 0, 00:23:39.437 "avg_latency_us": 11304.533129263687, 00:23:39.437 "min_latency_us": 1084.4647619047619, 00:23:39.437 "max_latency_us": 12857.539047619048 00:23:39.437 } 00:23:39.437 ], 00:23:39.437 "core_count": 1 00:23:39.437 } 00:23:39.437 05:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:39.437 [2024-12-10 05:00:23.900862] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:23:39.437 [2024-12-10 05:00:23.900910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724576 ] 00:23:39.437 [2024-12-10 05:00:23.974980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.437 [2024-12-10 05:00:24.011455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.437 [2024-12-10 05:00:26.090629] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:39.437 [2024-12-10 05:00:26.090672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.437 [2024-12-10 05:00:26.090683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.437 [2024-12-10 05:00:26.090692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.437 [2024-12-10 05:00:26.090699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.437 [2024-12-10 05:00:26.090706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.437 [2024-12-10 05:00:26.090713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.437 [2024-12-10 05:00:26.090719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.437 [2024-12-10 05:00:26.090726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.437 [2024-12-10 05:00:26.090732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:39.437 [2024-12-10 05:00:26.090756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:39.437 [2024-12-10 05:00:26.090770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb69570 (9): Bad file descriptor 00:23:39.437 [2024-12-10 05:00:26.184325] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:39.437 Running I/O for 1 seconds... 00:23:39.437 11222.00 IOPS, 43.84 MiB/s 00:23:39.437 Latency(us) 00:23:39.437 [2024-12-10T04:00:30.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.437 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:39.437 Verification LBA range: start 0x0 length 0x4000 00:23:39.437 NVMe0n1 : 1.01 11277.97 44.05 0.00 0.00 11304.53 1084.46 12857.54 00:23:39.437 [2024-12-10T04:00:30.574Z] =================================================================================================================== 00:23:39.437 [2024-12-10T04:00:30.574Z] Total : 11277.97 44.05 0.00 0.00 11304.53 1084.46 12857.54 00:23:39.437 05:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:39.437 05:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:39.696 05:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:39.954 05:00:30 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:39.954 05:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:39.954 05:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:40.213 05:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:43.499 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:43.499 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:43.499 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 724576 00:23:43.499 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 724576 ']' 00:23:43.499 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 724576 00:23:43.499 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:43.499 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.499 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 724576 00:23:43.499 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:43.499 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:43.499 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 724576' 00:23:43.499 killing process 
with pid 724576 00:23:43.499 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 724576 00:23:43.499 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 724576 00:23:43.758 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:43.758 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:43.758 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:43.758 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:43.758 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:43.758 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:43.758 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:43.758 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:43.758 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:43.758 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:43.758 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:43.758 rmmod nvme_tcp 00:23:43.758 rmmod nvme_fabrics 00:23:43.758 rmmod nvme_keyring 00:23:44.017 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:44.017 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:44.017 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:44.017 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 721253 ']' 00:23:44.017 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@518 -- # killprocess 721253 00:23:44.017 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 721253 ']' 00:23:44.017 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 721253 00:23:44.017 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:44.017 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.017 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 721253 00:23:44.017 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:44.017 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:44.017 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 721253' 00:23:44.017 killing process with pid 721253 00:23:44.017 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 721253 00:23:44.017 05:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 721253 00:23:44.017 05:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:44.017 05:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:44.017 05:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:44.017 05:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:44.017 05:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:44.017 05:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:44.017 05:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:44.017 05:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:44.017 05:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:44.017 05:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.017 05:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.017 05:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:46.550 00:23:46.550 real 0m37.668s 00:23:46.550 user 1m59.808s 00:23:46.550 sys 0m7.894s 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:46.550 ************************************ 00:23:46.550 END TEST nvmf_failover 00:23:46.550 ************************************ 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.550 ************************************ 00:23:46.550 START TEST nvmf_host_discovery 00:23:46.550 ************************************ 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:46.550 * Looking for test storage... 
00:23:46.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:46.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.550 --rc genhtml_branch_coverage=1 00:23:46.550 --rc genhtml_function_coverage=1 00:23:46.550 --rc 
genhtml_legend=1 00:23:46.550 --rc geninfo_all_blocks=1 00:23:46.550 --rc geninfo_unexecuted_blocks=1 00:23:46.550 00:23:46.550 ' 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:46.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.550 --rc genhtml_branch_coverage=1 00:23:46.550 --rc genhtml_function_coverage=1 00:23:46.550 --rc genhtml_legend=1 00:23:46.550 --rc geninfo_all_blocks=1 00:23:46.550 --rc geninfo_unexecuted_blocks=1 00:23:46.550 00:23:46.550 ' 00:23:46.550 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:46.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.550 --rc genhtml_branch_coverage=1 00:23:46.550 --rc genhtml_function_coverage=1 00:23:46.550 --rc genhtml_legend=1 00:23:46.550 --rc geninfo_all_blocks=1 00:23:46.551 --rc geninfo_unexecuted_blocks=1 00:23:46.551 00:23:46.551 ' 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:46.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.551 --rc genhtml_branch_coverage=1 00:23:46.551 --rc genhtml_function_coverage=1 00:23:46.551 --rc genhtml_legend=1 00:23:46.551 --rc geninfo_all_blocks=1 00:23:46.551 --rc geninfo_unexecuted_blocks=1 00:23:46.551 00:23:46.551 ' 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.551 05:00:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.551 05:00:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.551 05:00:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:46.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:46.551 05:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:53.122 
05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.122 05:00:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:53.122 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:53.122 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:53.122 Found net devices under 0000:af:00.0: cvl_0_0 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.122 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:53.123 Found net devices under 0000:af:00.1: cvl_0_1 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:53.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:53.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:23:53.123 00:23:53.123 --- 10.0.0.2 ping statistics --- 00:23:53.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.123 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:53.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:23:53.123 00:23:53.123 --- 10.0.0.1 ping statistics --- 00:23:53.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.123 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.123 
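The trace above (nvmf/common.sh@265-291) builds the two-endpoint test topology: the target interface is moved into a fresh network namespace, both sides get addresses on 10.0.0.0/24, an iptables rule opens TCP 4420, and a ping in each direction verifies connectivity. A minimal dry-run sketch of that sequence, echoing the commands instead of executing them (the `run` and `nvmf_netns_dry_run` names are illustrative, not from SPDK; interface names and addresses are taken from the log):

```shell
#!/usr/bin/env bash
# Dry-run reconstruction of the namespace setup recorded in the log.
# "run" only echoes each command, so this is safe without root or the
# cvl_0_* interfaces actually existing.
nvmf_netns_dry_run() {
    local ns=cvl_0_0_ns_spdk
    run() { echo "+ $*"; }

    run ip netns add "$ns"                       # namespace for the target side
    run ip link set cvl_0_0 netns "$ns"          # move target NIC into it
    run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator address (host ns)
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    run ip link set cvl_0_1 up
    run ip netns exec "$ns" ip link set cvl_0_0 up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
    run ping -c 1 10.0.0.2                       # initiator -> target
    run ip netns exec "$ns" ping -c 1 10.0.0.1   # target -> initiator
}
nvmf_netns_dry_run
```

Running the real commands requires root; in the log they are issued through the `ipts`/`iptables` wrappers so the ACCEPT rule is tagged with an `SPDK_NVMF` comment for later cleanup.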
05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=729843 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 729843 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 729843 ']' 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.123 [2024-12-10 05:00:43.469427] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:23:53.123 [2024-12-10 05:00:43.469469] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.123 [2024-12-10 05:00:43.533062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.123 [2024-12-10 05:00:43.572595] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.123 [2024-12-10 05:00:43.572629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.123 [2024-12-10 05:00:43.572637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.123 [2024-12-10 05:00:43.572644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.123 [2024-12-10 05:00:43.572649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:53.123 [2024-12-10 05:00:43.573130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.123 [2024-12-10 05:00:43.717023] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.123 [2024-12-10 05:00:43.729202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:53.123 05:00:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.123 null0 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.123 null1 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=729866 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:53.123 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 729866 /tmp/host.sock 00:23:53.124 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 729866 ']' 00:23:53.124 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:53.124 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.124 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:53.124 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:53.124 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.124 05:00:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.124 [2024-12-10 05:00:43.805440] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:23:53.124 [2024-12-10 05:00:43.805482] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729866 ] 00:23:53.124 [2024-12-10 05:00:43.876629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.124 [2024-12-10 05:00:43.915468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:53.124 05:00:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:53.124 05:00:44 
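Distilled from the `rpc_cmd` invocations in this section, the test's RPC sequence is: configure the target's TCP transport and discovery listener, create null bdevs, start host-side discovery, and only then create the data subsystem, so the discovery service reports it to the already-connected host. A sketch in log order (assumption: `rpc_cmd` wraps `scripts/rpc.py`; the `run`/`rpc_sketch` names are illustrative and the commands are echoed rather than sent to a live target):

```shell
#!/usr/bin/env bash
# Echo the RPC sequence from the log; a live run would need both SPDK
# targets up (default socket for the target, /tmp/host.sock for the host).
rpc_sketch() {
    run() { echo "+ $*"; }
    local RPC="scripts/rpc.py"   # assumed path of the rpc_cmd wrapper's script

    # Target side: TCP transport, discovery listener on 8009, backing bdevs
    run "$RPC" nvmf_create_transport -t tcp -o -u 8192
    run "$RPC" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    run "$RPC" bdev_null_create null0 1000 512
    run "$RPC" bdev_null_create null1 1000 512

    # Host side: start discovery before the data subsystem exists
    run "$RPC" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

    # Target side: now create the subsystem the discovery log will report
    run "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    run "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    run "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    run "$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
}
rpc_sketch
```

The `bdev_nvme` INFO lines later in the log ("new subsystem nvme0", "ctrlr was created to 10.0.0.2:4420") are the host reacting to the `nvmf_subsystem_add_host` step of this sequence.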
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:53.124 
05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:53.124 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.384 [2024-12-10 05:00:44.338746] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.384 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.644 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:53.644 05:00:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:54.212 [2024-12-10 05:00:45.044385] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:54.212 [2024-12-10 05:00:45.044403] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:54.212 [2024-12-10 05:00:45.044414] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:54.212 [2024-12-10 05:00:45.130663] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:54.212 [2024-12-10 05:00:45.185179] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:54.213 [2024-12-10 05:00:45.185896] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x1c89ee0:1 started. 00:23:54.213 [2024-12-10 05:00:45.187260] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:54.213 [2024-12-10 05:00:45.187276] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:54.213 [2024-12-10 05:00:45.193689] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c89ee0 was disconnected and freed. delete nvme_qpair. 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.473 05:00:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.473 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.732 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.732 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:54.732 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.732 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:54.732 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:54.732 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.732 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.732 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:54.732 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.733 [2024-12-10 05:00:45.748117] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1c8a0c0:1 started. 
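The trace above repeatedly exercises a polling helper, `waitforcondition`, whose xtrace lines reference common/autotest_common.sh around lines 918-924. A minimal sketch reconstructed from the traced behavior (`local cond`, `local max=10`, `(( max-- ))`, `eval` of the condition, `sleep 1` between retries); the actual SPDK helper may differ in detail:

```shell
# Sketch of the waitforcondition polling loop seen in the xtrace above.
# Reconstructed from the traced behavior; exact SPDK source may differ.
waitforcondition() {
	local cond=$1
	local max=10

	while (( max-- )); do
		# The caller passes a complete bash test as a single string, e.g.
		#   waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
		if eval "$cond"; then
			return 0
		fi
		sleep 1
	done
	return 1  # condition never became true within ~10 retries
}
```

Each retry re-evaluates the condition string from scratch, which is why the trace shows the full `rpc_cmd`/`jq` pipeline expanding again on every iteration.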
00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:54.733 [2024-12-10 05:00:45.754915] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c8a0c0 was disconnected and freed. delete nvme_qpair. 
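The conditions being polled are thin `rpc_cmd`/`jq` pipelines against the host-side RPC socket. A sketch of the two query helpers, reconstructed from the discovery.sh@55 and discovery.sh@63 trace lines (the real helpers in SPDK's test/nvmf/host/discovery.sh may differ; `rpc_cmd` is assumed to be the test environment's wrapper around SPDK's scripts/rpc.py):

```shell
# Sketches of the two query helpers whose pipelines appear in the xtrace.
# /tmp/host.sock is the host-side SPDK RPC socket used throughout this test.

# discovery.sh@55: bdev names as one sorted, space-separated line.
get_bdev_list() {
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
		| jq -r '.[].name' | sort | xargs
}

# discovery.sh@63: the trsvcid (TCP port) of every path to a controller,
# e.g. "4420 4421" once the second listener at port 4421 is attached.
get_subsystem_paths() {
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
		| jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
```

`xargs` collapses jq's newline-separated output onto one line, which is what makes the trace's `[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]` string comparisons work.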
00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.733 [2024-12-10 05:00:45.850825] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:54.733 [2024-12-10 05:00:45.851526] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:54.733 [2024-12-10 05:00:45.851546] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 
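The `is_notification_count_eq` checks rest on a `get_notification_count` helper (discovery.sh@74-75 in the trace). A sketch reconstructed from the traced values: fetch notifications newer than the last seen `notify_id`, count them, and advance `notify_id` so each event is counted once. The `notify_id` bookkeeping is inferred from the traced progression (1, then 2, then staying at 2 when the count is 0) and is an assumption, not confirmed source:

```shell
# Sketch of get_notification_count as inferred from the xtrace.
# notify_get_notifications -i N returns notifications with id > N;
# the advance of notify_id by the count is an assumption from the trace.
notify_id=0

get_notification_count() {
	notification_count=$(rpc_cmd -s /tmp/host.sock \
		notify_get_notifications -i "$notify_id" | jq '. | length')
	notify_id=$(( notify_id + notification_count ))
}
```

This is why the trace shows `-i 0`, then `-i 1`, then `-i 2` on successive checks: each call resumes from the previously consumed notification id.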
00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.733 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.992 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.992 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.992 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.992 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.993 [2024-12-10 05:00:45.937782] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.993 05:00:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:54.993 05:00:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:55.252 [2024-12-10 05:00:46.204977] bdev_nvme.c:5656:nvme_ctrlr_create_done: 
*INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:55.252 [2024-12-10 05:00:46.205010] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:55.252 [2024-12-10 05:00:46.205017] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:55.252 [2024-12-10 05:00:46.205022] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:56.190 05:00:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:56.190 05:00:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:56.190 05:00:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@922 -- # return 0 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.190 [2024-12-10 05:00:47.102390] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:56.190 [2024-12-10 05:00:47.102410] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:56.190 [2024-12-10 05:00:47.109997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.190 [2024-12-10 05:00:47.110015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.190 [2024-12-10 05:00:47.110023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.190 [2024-12-10 05:00:47.110030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.190 [2024-12-10 05:00:47.110041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.190 [2024-12-10 05:00:47.110048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.190 [2024-12-10 05:00:47.110055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.190 [2024-12-10 05:00:47.110061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.190 [2024-12-10 05:00:47.110067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3d0 is same with the state(6) to be set 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:56.190 05:00:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:56.190 [2024-12-10 05:00:47.120012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5a3d0 (9): Bad file descriptor 00:23:56.190 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.190 [2024-12-10 05:00:47.130046] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:56.190 [2024-12-10 05:00:47.130056] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:56.190 [2024-12-10 05:00:47.130063] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:56.190 [2024-12-10 05:00:47.130068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:56.190 [2024-12-10 05:00:47.130085] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:56.190 [2024-12-10 05:00:47.130274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.190 [2024-12-10 05:00:47.130289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5a3d0 with addr=10.0.0.2, port=4420 00:23:56.190 [2024-12-10 05:00:47.130297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3d0 is same with the state(6) to be set 00:23:56.190 [2024-12-10 05:00:47.130309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5a3d0 (9): Bad file descriptor 00:23:56.190 [2024-12-10 05:00:47.130319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:56.190 [2024-12-10 05:00:47.130326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:56.190 [2024-12-10 05:00:47.130334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:56.190 [2024-12-10 05:00:47.130341] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:56.190 [2024-12-10 05:00:47.130346] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:56.191 [2024-12-10 05:00:47.130350] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:56.191 [2024-12-10 05:00:47.140115] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:56.191 [2024-12-10 05:00:47.140129] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:56.191 [2024-12-10 05:00:47.140133] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:56.191 [2024-12-10 05:00:47.140137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:56.191 [2024-12-10 05:00:47.140151] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:56.191 [2024-12-10 05:00:47.140404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.191 [2024-12-10 05:00:47.140418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5a3d0 with addr=10.0.0.2, port=4420 00:23:56.191 [2024-12-10 05:00:47.140426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3d0 is same with the state(6) to be set 00:23:56.191 [2024-12-10 05:00:47.140436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5a3d0 (9): Bad file descriptor 00:23:56.191 [2024-12-10 05:00:47.140446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:56.191 [2024-12-10 05:00:47.140452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:56.191 [2024-12-10 05:00:47.140459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:56.191 [2024-12-10 05:00:47.140465] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:56.191 [2024-12-10 05:00:47.140469] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:56.191 [2024-12-10 05:00:47.140473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:56.191 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.191 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:56.191 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:56.191 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:56.191 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:56.191 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:56.191 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:56.191 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:56.191 [2024-12-10 05:00:47.150182] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:56.191 [2024-12-10 05:00:47.150198] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:56.191 [2024-12-10 05:00:47.150202] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:56.191 [2024-12-10 05:00:47.150206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:56.191 [2024-12-10 05:00:47.150223] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:56.191 [2024-12-10 05:00:47.150474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.191 [2024-12-10 05:00:47.150488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5a3d0 with addr=10.0.0.2, port=4420 00:23:56.191 [2024-12-10 05:00:47.150496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3d0 is same with the state(6) to be set 00:23:56.191 [2024-12-10 05:00:47.150507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5a3d0 (9): Bad file descriptor 00:23:56.191 [2024-12-10 05:00:47.150523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:56.191 [2024-12-10 05:00:47.150529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:56.191 [2024-12-10 05:00:47.150536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:56.191 [2024-12-10 05:00:47.150541] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:56.191 [2024-12-10 05:00:47.150546] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:56.191 [2024-12-10 05:00:47.150549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:56.191 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.191 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:56.191 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.191 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:56.191 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.191 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:56.191 [2024-12-10 05:00:47.160253] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:56.191 [2024-12-10 05:00:47.160266] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:56.191 [2024-12-10 05:00:47.160270] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:56.191 [2024-12-10 05:00:47.160274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:56.191 [2024-12-10 05:00:47.160289] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:56.191 [2024-12-10 05:00:47.160396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.191 [2024-12-10 05:00:47.160411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5a3d0 with addr=10.0.0.2, port=4420 00:23:56.191 [2024-12-10 05:00:47.160419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3d0 is same with the state(6) to be set 00:23:56.191 [2024-12-10 05:00:47.160430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5a3d0 (9): Bad file descriptor 00:23:56.191 [2024-12-10 05:00:47.160440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:56.191 [2024-12-10 05:00:47.160446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:56.191 [2024-12-10 05:00:47.160453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:56.191 [2024-12-10 05:00:47.160459] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:56.191 [2024-12-10 05:00:47.160463] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:56.191 [2024-12-10 05:00:47.160467] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:56.191 [2024-12-10 05:00:47.170319] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:56.191 [2024-12-10 05:00:47.170330] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:56.191 [2024-12-10 05:00:47.170335] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:56.191 [2024-12-10 05:00:47.170342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:56.191 [2024-12-10 05:00:47.170356] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:56.191 [2024-12-10 05:00:47.170468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.191 [2024-12-10 05:00:47.170480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5a3d0 with addr=10.0.0.2, port=4420 00:23:56.191 [2024-12-10 05:00:47.170488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3d0 is same with the state(6) to be set 00:23:56.191 [2024-12-10 05:00:47.170498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5a3d0 (9): Bad file descriptor 00:23:56.191 [2024-12-10 05:00:47.170507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:56.191 [2024-12-10 05:00:47.170513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:56.191 [2024-12-10 05:00:47.170520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:56.191 [2024-12-10 05:00:47.170525] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:56.191 [2024-12-10 05:00:47.170530] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:56.191 [2024-12-10 05:00:47.170534] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:56.191 [2024-12-10 05:00:47.180387] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:56.191 [2024-12-10 05:00:47.180396] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:56.191 [2024-12-10 05:00:47.180400] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:56.191 [2024-12-10 05:00:47.180404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:56.191 [2024-12-10 05:00:47.180418] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:56.191 [2024-12-10 05:00:47.180589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.191 [2024-12-10 05:00:47.180601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5a3d0 with addr=10.0.0.2, port=4420 00:23:56.191 [2024-12-10 05:00:47.180608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3d0 is same with the state(6) to be set 00:23:56.191 [2024-12-10 05:00:47.180618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5a3d0 (9): Bad file descriptor 00:23:56.191 [2024-12-10 05:00:47.180628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:56.191 [2024-12-10 05:00:47.180635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:56.191 [2024-12-10 05:00:47.180642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:56.191 [2024-12-10 05:00:47.180647] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:56.191 [2024-12-10 05:00:47.180652] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:56.191 [2024-12-10 05:00:47.180655] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:56.191 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.191 [2024-12-10 05:00:47.189240] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:56.191 [2024-12-10 05:00:47.189255] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:56.191 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 
00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:56.192 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' 
'"$(get_bdev_list)"' == '""' ']]' 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:56.451 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.452 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.452 05:00:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.389 [2024-12-10 05:00:48.473270] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:57.389 [2024-12-10 05:00:48.473286] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 
00:23:57.389 [2024-12-10 05:00:48.473296] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:57.648 [2024-12-10 05:00:48.560550] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:57.648 [2024-12-10 05:00:48.619034] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:57.649 [2024-12-10 05:00:48.619583] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1c717a0:1 started. 00:23:57.649 [2024-12-10 05:00:48.621110] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:57.649 [2024-12-10 05:00:48.621133] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.649 [2024-12-10 05:00:48.622380] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1c717a0 was disconnected and freed. delete nvme_qpair. 
00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.649 request: 00:23:57.649 { 00:23:57.649 "name": "nvme", 00:23:57.649 "trtype": "tcp", 00:23:57.649 "traddr": "10.0.0.2", 00:23:57.649 "adrfam": "ipv4", 00:23:57.649 "trsvcid": "8009", 00:23:57.649 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:57.649 "wait_for_attach": true, 00:23:57.649 "method": "bdev_nvme_start_discovery", 00:23:57.649 "req_id": 1 00:23:57.649 } 00:23:57.649 Got JSON-RPC error response 00:23:57.649 response: 00:23:57.649 { 00:23:57.649 "code": -17, 00:23:57.649 
"message": "File exists" 00:23:57.649 } 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.649 request: 00:23:57.649 { 00:23:57.649 "name": "nvme_second", 00:23:57.649 "trtype": "tcp", 00:23:57.649 "traddr": "10.0.0.2", 00:23:57.649 "adrfam": "ipv4", 00:23:57.649 "trsvcid": "8009", 00:23:57.649 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:57.649 "wait_for_attach": true, 00:23:57.649 "method": "bdev_nvme_start_discovery", 00:23:57.649 "req_id": 1 00:23:57.649 } 00:23:57.649 Got JSON-RPC error response 00:23:57.649 response: 00:23:57.649 { 00:23:57.649 "code": -17, 00:23:57.649 "message": "File exists" 00:23:57.649 } 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:57.649 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 
00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.908 05:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.844 [2024-12-10 05:00:49.864520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.844 [2024-12-10 05:00:49.864547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c56430 with addr=10.0.0.2, port=8010 00:23:58.844 [2024-12-10 05:00:49.864558] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:58.844 [2024-12-10 05:00:49.864565] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:58.844 [2024-12-10 05:00:49.864570] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:59.781 [2024-12-10 05:00:50.867008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.781 [2024-12-10 05:00:50.867035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c56430 with addr=10.0.0.2, port=8010 00:23:59.781 [2024-12-10 05:00:50.867051] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:59.781 [2024-12-10 05:00:50.867057] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 
00:23:59.781 [2024-12-10 05:00:50.867063] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:01.158 [2024-12-10 05:00:51.869152] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:01.158 request: 00:24:01.158 { 00:24:01.158 "name": "nvme_second", 00:24:01.158 "trtype": "tcp", 00:24:01.158 "traddr": "10.0.0.2", 00:24:01.158 "adrfam": "ipv4", 00:24:01.158 "trsvcid": "8010", 00:24:01.158 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:01.158 "wait_for_attach": false, 00:24:01.158 "attach_timeout_ms": 3000, 00:24:01.158 "method": "bdev_nvme_start_discovery", 00:24:01.158 "req_id": 1 00:24:01.158 } 00:24:01.158 Got JSON-RPC error response 00:24:01.158 response: 00:24:01.158 { 00:24:01.158 "code": -110, 00:24:01.158 "message": "Connection timed out" 00:24:01.158 } 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 
-- # xtrace_disable 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 729866 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:01.159 rmmod nvme_tcp 00:24:01.159 rmmod nvme_fabrics 00:24:01.159 rmmod nvme_keyring 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 729843 ']' 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 729843 
00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 729843 ']' 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 729843 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.159 05:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 729843 00:24:01.159 05:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:01.159 05:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:01.159 05:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 729843' 00:24:01.159 killing process with pid 729843 00:24:01.159 05:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 729843 00:24:01.159 05:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 729843 00:24:01.159 05:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:01.159 05:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:01.159 05:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:01.159 05:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:01.159 05:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:01.159 05:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:01.159 05:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:01.159 05:00:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:01.159 05:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:01.159 05:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.159 05:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.159 05:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:03.697 00:24:03.697 real 0m16.979s 00:24:03.697 user 0m20.056s 00:24:03.697 sys 0m5.843s 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.697 ************************************ 00:24:03.697 END TEST nvmf_host_discovery 00:24:03.697 ************************************ 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.697 ************************************ 00:24:03.697 START TEST nvmf_host_multipath_status 00:24:03.697 ************************************ 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh 
--transport=tcp 00:24:03.697 * Looking for test storage... 00:24:03.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 
00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:03.697 
05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:03.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.697 --rc genhtml_branch_coverage=1 00:24:03.697 --rc genhtml_function_coverage=1 00:24:03.697 --rc genhtml_legend=1 00:24:03.697 --rc geninfo_all_blocks=1 00:24:03.697 --rc geninfo_unexecuted_blocks=1 00:24:03.697 00:24:03.697 ' 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:03.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.697 --rc genhtml_branch_coverage=1 00:24:03.697 --rc genhtml_function_coverage=1 00:24:03.697 --rc genhtml_legend=1 00:24:03.697 --rc geninfo_all_blocks=1 00:24:03.697 --rc geninfo_unexecuted_blocks=1 00:24:03.697 00:24:03.697 ' 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:03.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.697 --rc genhtml_branch_coverage=1 00:24:03.697 --rc genhtml_function_coverage=1 00:24:03.697 --rc genhtml_legend=1 00:24:03.697 --rc geninfo_all_blocks=1 00:24:03.697 --rc geninfo_unexecuted_blocks=1 00:24:03.697 00:24:03.697 ' 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:03.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.697 --rc genhtml_branch_coverage=1 00:24:03.697 --rc genhtml_function_coverage=1 00:24:03.697 --rc genhtml_legend=1 00:24:03.697 --rc geninfo_all_blocks=1 00:24:03.697 --rc geninfo_unexecuted_blocks=1 00:24:03.697 00:24:03.697 ' 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 
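The `lt 1.15 2` / `cmp_versions` sequence traced above boils down to a component-wise numeric comparison: split both version strings on `.`, `-`, or `:`, then compare fields left to right, treating missing fields as zero. A minimal standalone sketch of that logic (digit-only fields assumed; the real `scripts/common.sh` helper also handles the other comparison operators):

```shell
# Sketch of the component-wise version check walked through in the trace:
# returns 0 (true) if $1 is strictly less than $2.
version_lt() {
    local IFS='.-:' v d1 d2 ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    # Iterate over the longer of the two field lists, zero-padding the other.
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        d1=${ver1[v]:-0}; d2=${ver2[v]:-0}
        (( d1 < d2 )) && return 0
        (( d1 > d2 )) && return 1
    done
    return 1   # all fields equal, so not strictly less-than
}
version_lt 1.15 2 && echo "1.15 < 2"
```

Here lcov 1.15 compares below 2, so the script selects the extra `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options seen in the trace that follows.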
00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.697 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:03.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:03.698 05:00:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:03.698 05:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:10.269 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:10.270 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:10.270 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:10.270 Found net devices under 0000:af:00.0: cvl_0_0 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.270 05:01:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:10.270 Found net devices under 0000:af:00.1: cvl_0_1 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.270 05:01:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:10.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:24:10.270 00:24:10.270 --- 10.0.0.2 ping statistics --- 00:24:10.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.270 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:10.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms
00:24:10.270
00:24:10.270 --- 10.0.0.1 ping statistics ---
00:24:10.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:10.270 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=734841
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 734841
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 734841 ']'
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:10.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:10.270 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:10.270 [2024-12-10 05:01:00.483402] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
00:24:10.270 [2024-12-10 05:01:00.483452] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:10.270 [2024-12-10 05:01:00.564111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:24:10.270 [2024-12-10 05:01:00.602912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:10.270 [2024-12-10 05:01:00.602947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
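[Editor's note] The `waitforlisten` step above polls until the freshly started `nvmf_tgt` has created its UNIX-domain RPC socket (`/var/tmp/spdk.sock`), retrying up to `max_retries` times. A minimal self-contained sketch of that polling pattern, using a throwaway python3 listener as a stand-in for the SPDK target (the socket path and helper here are illustrative, not SPDK's code):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll until the app has created its
# UNIX-domain RPC socket, giving up after max_retries attempts.
set -euo pipefail

sock="/tmp/rpc_demo_$$.sock"   # illustrative path, not /var/tmp/spdk.sock

# Stand-in for the target process: binds a UNIX socket, then idles.
python3 - "$sock" <<'PY' &
import socket, sys, time
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.bind(sys.argv[1])
s.listen(1)
time.sleep(10)
PY
app_pid=$!

max_retries=100
i=0
until [ -S "$sock" ]; do          # -S: path exists and is a socket
    i=$((i + 1))
    if [ "$i" -ge "$max_retries" ]; then
        echo "gave up waiting for $sock" >&2
        kill "$app_pid" 2>/dev/null || true
        exit 1
    fi
    sleep 0.1
done
echo "listening on $sock"

kill "$app_pid" 2>/dev/null || true
rm -f "$sock"
```

In the real harness the loop additionally probes the socket with an RPC call rather than only checking that the path exists, so a target that dies after creating the socket is still caught.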
00:24:10.270 [2024-12-10 05:01:00.602955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:10.270 [2024-12-10 05:01:00.602962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:10.270 [2024-12-10 05:01:00.602967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:10.271 [2024-12-10 05:01:00.604019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:10.271 [2024-12-10 05:01:00.604019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:10.271 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:10.271 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:24:10.271 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:10.271 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:10.271 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:10.271 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:10.271 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=734841
00:24:10.271 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:24:10.271 [2024-12-10 05:01:00.920847] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:10.271 05:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:24:10.271 Malloc0
00:24:10.271 05:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:24:10.271 05:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:10.530 05:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:10.789 [2024-12-10 05:01:01.746262] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:10.789 05:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:11.049 [2024-12-10 05:01:01.938706] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:11.049 05:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:24:11.049 05:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=735096
00:24:11.049 05:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:24:11.049 05:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 735096 /var/tmp/bdevperf.sock
00:24:11.049 05:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 735096 ']'
00:24:11.049 05:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:11.049 05:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:11.049 05:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:11.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:11.049 05:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:11.049 05:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:11.309 05:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:11.309 05:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:24:11.309 05:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:24:11.309 05:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:24:11.877 Nvme0n1
00:24:11.877 05:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:24:12.136 Nvme0n1
00:24:12.136 05:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:24:12.136 05:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:24:14.672 05:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:24:14.672 05:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:24:14.672 05:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:24:14.672 05:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:24:15.609 05:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:24:15.609 05:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:15.609 05:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:15.609 05:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:15.868 05:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:15.868 05:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:15.868 05:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:15.868 05:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:16.127 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:16.127 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:16.127 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:16.127 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:16.387 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:16.387 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:16.387 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:16.387 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:16.387 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:16.387 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:16.387 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:16.387 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:16.646 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:16.646 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:16.646 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:16.646 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:16.905 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:16.905 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:24:16.905 05:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:24:17.164 05:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:24:17.423 05:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:24:18.360 05:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:24:18.360 05:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:24:18.360 05:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:18.360 05:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:18.619 05:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:18.619 05:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:24:18.619 05:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:18.619 05:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:18.619 05:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:18.620 05:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:18.620 05:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:18.620 05:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:18.879 05:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:18.879 05:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:18.879 05:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:18.879 05:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:19.137 05:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:19.137 05:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:19.137 05:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:19.137 05:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:19.396 05:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:19.396 05:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:19.396 05:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:19.396 05:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:19.656 05:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:19.656 05:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:24:19.656 05:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:24:19.656 05:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:24:19.916 05:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:24:21.294 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:24:21.294 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:21.294 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:21.294 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:21.294 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:21.294 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:21.294 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:21.294 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:21.553 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:21.553 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:21.553 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:21.553 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:21.553 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:21.553 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:21.553 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:21.553 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:21.812 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:21.812 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:21.812 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:21.812 05:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:22.071 05:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:22.071 05:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:22.071 05:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:22.071 05:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:22.331 05:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:22.331 05:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:24:22.331 05:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:24:22.590 05:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:24:22.590 05:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:24:23.968 05:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:24:23.968 05:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:23.968 05:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:23.968 05:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:23.968 05:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:23.968 05:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:23.968 05:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:23.968 05:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:24.225 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:24.225 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:24.225 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:24.225 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:24.225 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:24.225 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:24.225 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:24.225 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:24.483 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:24.483 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:24.483 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:24.483 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:24.742 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:24.742 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:24:24.742 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:24.742 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:25.000 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:25.000 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:24:25.000 05:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:24:25.260 05:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:24:25.260 05:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:24:26.637 05:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:24:26.637 05:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:24:26.637 05:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:26.637 05:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:26.637 05:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:26.637 05:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:26.637 05:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:26.637 05:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:26.637 05:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:26.638 05:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:26.638 05:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:26.638 05:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:26.896 05:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:26.896 05:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:26.896 05:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:26.896 05:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:27.155 05:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:27.155 05:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:24:27.155 05:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:27.155 05:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:27.414 05:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:27.414 05:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:24:27.414 05:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:27.414 05:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:27.732 05:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:27.732 05:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:24:27.732 05:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:24:27.732 05:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:24:28.011 05:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:24:28.981 05:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:24:28.981 05:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:24:28.981 05:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:28.981 05:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:29.240 05:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:29.240 05:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:24:29.240 05:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:29.240 05:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:29.499 05:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:29.499 05:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:29.499 05:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:29.499 05:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:29.499 05:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:29.499 05:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:29.499 05:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:29.499 05:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:29.757 05:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:29.758 05:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:24:29.758 05:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:29.758 05:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:30.016 05:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:30.016 05:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:30.016 05:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status --
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.016 05:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:30.274 05:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.274 05:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:30.533 05:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:30.533 05:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:30.533 05:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:30.792 05:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:32.169 05:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:32.169 05:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:32.169 05:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:32.169 05:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:32.169 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.169 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:32.169 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.169 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:32.169 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.169 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:32.169 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.428 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:32.428 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.428 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:32.428 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:32.428 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:32.686 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.686 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:32.686 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.686 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:32.945 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.945 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:32.945 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.945 05:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:33.204 05:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.204 05:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:33.204 05:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:33.204 05:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:33.463 05:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:34.840 05:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:34.840 05:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:34.841 05:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.841 05:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:34.841 05:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:34.841 05:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:34.841 05:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.841 05:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:34.841 05:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.841 05:01:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:34.841 05:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.841 05:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:35.099 05:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.099 05:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:35.099 05:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.099 05:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:35.359 05:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.359 05:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:35.359 05:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.359 05:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:35.618 05:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.618 
05:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:35.618 05:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.618 05:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:35.877 05:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.877 05:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:35.877 05:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:35.877 05:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:36.135 05:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:37.072 05:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:37.072 05:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:37.072 05:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.072 05:01:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:37.331 05:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.331 05:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:37.331 05:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.331 05:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:37.590 05:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.590 05:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:37.590 05:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:37.590 05:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.849 05:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.849 05:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:37.849 05:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:37.849 05:01:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.108 05:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.108 05:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:38.108 05:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.108 05:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:38.367 05:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.367 05:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:38.367 05:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.367 05:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:38.367 05:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.367 05:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:38.367 05:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:38.626 05:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:38.885 05:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:39.821 05:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:39.821 05:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:39.821 05:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.821 05:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:40.078 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.078 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:40.078 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.078 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:40.337 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:40.337 
05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:40.337 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.337 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:40.595 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.595 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:40.595 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.595 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:40.853 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.853 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:40.853 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.853 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:40.853 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
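The `check_status` wrapper (host/multipath_status.sh@68-73) that drives each verification round above appears to be six `port_status` calls in a fixed order: `current`, `connected`, `accessible` for ports 4420 then 4421. A hedged reconstruction of that order, inferred from the log, with `port_status` stubbed out so the sketch runs without a live bdevperf socket:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stub: the real port_status queries rpc.py + jq; this one just records the
# check so the call order can be inspected offline.
port_status() {
  echo "port=$1 field=$2 expected=$3"
}

# check_status CUR4420 CUR4421 CONN4420 CONN4421 ACC4420 ACC4421
# Argument order inferred from log lines such as
# "check_status true false true true true false".
check_status() {
  port_status 4420 current    "$1"
  port_status 4421 current    "$2"
  port_status 4420 connected  "$3"
  port_status 4421 connected  "$4"
  port_status 4420 accessible "$5"
  port_status 4421 accessible "$6"
}

check_status true false true true true false
```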
00:24:40.853 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:40.853 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.853 05:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:41.111 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:41.111 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 735096 00:24:41.111 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 735096 ']' 00:24:41.111 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 735096 00:24:41.111 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:41.111 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.111 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 735096 00:24:41.111 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:41.111 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:41.111 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 735096' 00:24:41.111 killing process with pid 735096 00:24:41.111 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 735096 
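Each scenario in this test flips ANA state through the `set_ANA_state` helper (host/multipath_status.sh@59-60): one `nvmf_subsystem_listener_set_ana_state` RPC per listener port. A sketch of that helper, with the rpc.py path replaced by an `$RPC` variable (here `echo`) so it can be dry-run without an SPDK target; point `$RPC` at the real `scripts/rpc.py` to drive a live subsystem:

```shell
#!/usr/bin/env bash
set -euo pipefail

RPC=echo   # dry-run; set to .../spdk/scripts/rpc.py to hit a live target
NQN=nqn.2016-06.io.spdk:cnode1

# set_ANA_state STATE_4420 STATE_4421 -- one listener RPC per port, matching
# the two back-to-back rpc.py calls visible in the log.
set_ANA_state() {
  "$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
  "$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

set_ANA_state non_optimized inaccessible
```

After each state change the script sleeps one second before `check_status`, giving the initiator time to process the ANA change notification; that pause is visible in the log as `sleep 1` between the RPC pair and the next round of checks.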
00:24:41.111 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 735096 00:24:41.111 { 00:24:41.111 "results": [ 00:24:41.111 { 00:24:41.111 "job": "Nvme0n1", 00:24:41.111 "core_mask": "0x4", 00:24:41.111 "workload": "verify", 00:24:41.111 "status": "terminated", 00:24:41.111 "verify_range": { 00:24:41.111 "start": 0, 00:24:41.111 "length": 16384 00:24:41.111 }, 00:24:41.111 "queue_depth": 128, 00:24:41.111 "io_size": 4096, 00:24:41.111 "runtime": 28.897008, 00:24:41.111 "iops": 10726.335404689647, 00:24:41.111 "mibps": 41.899747674568935, 00:24:41.111 "io_failed": 0, 00:24:41.111 "io_timeout": 0, 00:24:41.111 "avg_latency_us": 11913.282928116914, 00:24:41.111 "min_latency_us": 616.3504761904762, 00:24:41.111 "max_latency_us": 3019898.88 00:24:41.111 } 00:24:41.111 ], 00:24:41.111 "core_count": 1 00:24:41.111 } 00:24:41.373 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 735096 00:24:41.373 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:41.373 [2024-12-10 05:01:02.001179] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:24:41.373 [2024-12-10 05:01:02.001233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735096 ] 00:24:41.373 [2024-12-10 05:01:02.075080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.373 [2024-12-10 05:01:02.116028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:41.373 Running I/O for 90 seconds... 
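The terminated-job summary above reports both IOPS and MiB/s; with fixed 4096-byte I/Os the two are related by mibps = iops * io_size / 2^20, which is a quick way to sanity-check a bdevperf result. Checking the numbers copied from the JSON summary:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Values copied verbatim from the "results" summary above.
iops=10726.335404689647
io_size=4096          # bytes per I/O ("io_size" in the summary)

# MiB/s = IOPS * bytes-per-I/O / 2^20; python3 handles the float math.
python3 -c "print(round($iops * $io_size / 2**20, 6))"
# prints 41.899748, agreeing with the reported "mibps" of 41.8997476...
```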
00:24:41.373 11556.00 IOPS, 45.14 MiB/s [2024-12-10T04:01:32.510Z] 11544.00 IOPS, 45.09 MiB/s [2024-12-10T04:01:32.510Z] 11561.33 IOPS, 45.16 MiB/s [2024-12-10T04:01:32.510Z] 11571.25 IOPS, 45.20 MiB/s [2024-12-10T04:01:32.510Z] 11581.20 IOPS, 45.24 MiB/s [2024-12-10T04:01:32.510Z] 11620.00 IOPS, 45.39 MiB/s [2024-12-10T04:01:32.510Z] 11604.00 IOPS, 45.33 MiB/s [2024-12-10T04:01:32.510Z] 11598.12 IOPS, 45.31 MiB/s [2024-12-10T04:01:32.510Z] 11583.22 IOPS, 45.25 MiB/s [2024-12-10T04:01:32.510Z] 11599.00 IOPS, 45.31 MiB/s [2024-12-10T04:01:32.510Z] 11613.73 IOPS, 45.37 MiB/s [2024-12-10T04:01:32.510Z] 11614.42 IOPS, 45.37 MiB/s [2024-12-10T04:01:32.510Z] [2024-12-10 05:01:16.124409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.373 [2024-12-10 05:01:16.124446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:41.373 [2024-12-10 05:01:16.124483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.373 [2024-12-10 05:01:16.124491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:41.373 [2024-12-10 05:01:16.124505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.373 [2024-12-10 05:01:16.124517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:41.373 [2024-12-10 05:01:16.124530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.373 [2024-12-10 05:01:16.124538] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:41.373 [2024-12-10 05:01:16.124550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.373 [2024-12-10 05:01:16.124557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0
[~110 similar command/completion pairs elided, 05:01:16.124570 - 05:01:16.127781: ~108 WRITE commands (sqid:1, nsid:1, lba:11440-12296 stepping by 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and 7 READ commands (lba:11344-11392, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each logged by nvme_qpair.c: 243:nvme_io_qpair_print_command and completed by nvme_qpair.c: 474:spdk_nvme_print_completion with *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, sqhd advancing 005d through 004e, p:0 m:0 dnr:0]
00:24:41.376 [2024-12-10 05:01:16.127800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.376 [2024-12-10 05:01:16.127807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:41.376 [2024-12-10 05:01:16.127827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.376 [2024-12-10 05:01:16.127835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:41.376 [2024-12-10 05:01:16.127853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.376 [2024-12-10 05:01:16.127860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:41.376 [2024-12-10 05:01:16.127878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.376 [2024-12-10 05:01:16.127885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:41.376 [2024-12-10 05:01:16.127904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.376 [2024-12-10 05:01:16.127912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:41.376 [2024-12-10 05:01:16.127930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.376 [2024-12-10 05:01:16.127937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:41.376 [2024-12-10 05:01:16.127957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.376 [2024-12-10 05:01:16.127965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:41.376 [2024-12-10 05:01:16.127984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.376 [2024-12-10 05:01:16.127991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:41.376 11451.85 IOPS, 44.73 MiB/s [2024-12-10T04:01:32.513Z] 10633.86 IOPS, 41.54 MiB/s [2024-12-10T04:01:32.513Z] 9924.93 IOPS, 38.77 MiB/s [2024-12-10T04:01:32.513Z] 9433.75 IOPS, 36.85 MiB/s [2024-12-10T04:01:32.513Z] 9556.35 IOPS, 37.33 MiB/s [2024-12-10T04:01:32.513Z] 9671.33 IOPS, 37.78 MiB/s [2024-12-10T04:01:32.513Z] 9839.53 IOPS, 38.44 MiB/s [2024-12-10T04:01:32.513Z] 10014.40 IOPS, 39.12 MiB/s [2024-12-10T04:01:32.513Z] 10199.38 IOPS, 39.84 MiB/s [2024-12-10T04:01:32.513Z] 10257.59 IOPS, 40.07 MiB/s [2024-12-10T04:01:32.513Z] 10315.00 IOPS, 40.29 MiB/s [2024-12-10T04:01:32.513Z] 10365.88 IOPS, 40.49 MiB/s [2024-12-10T04:01:32.513Z] 10497.60 IOPS, 41.01 MiB/s [2024-12-10T04:01:32.513Z] 10614.12 IOPS, 41.46 MiB/s [2024-12-10T04:01:32.514Z] [2024-12-10 05:01:29.873772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.377 [2024-12-10 05:01:29.873810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.873843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.873852] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.873865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.873872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.873885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.873897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.873909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.873917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.873938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.377 [2024-12-10 05:01:29.873946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.873958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:35568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.377 [2024-12-10 05:01:29.873965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.873979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.377 [2024-12-10 05:01:29.873987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.873999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.377 [2024-12-10 05:01:29.874007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.874019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.377 [2024-12-10 05:01:29.874028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.874040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:35696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.377 [2024-12-10 05:01:29.874047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.876988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.876996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.877008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.877016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.877029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.877036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.877049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.377 [2024-12-10 05:01:29.877055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.877069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.377 [2024-12-10 05:01:29.877078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.877091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:35752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.377 [2024-12-10 05:01:29.877099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:41.377 [2024-12-10 05:01:29.877112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.377 [2024-12-10 05:01:29.877119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:41.378 [2024-12-10 05:01:29.877132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.378 [2024-12-10 05:01:29.877140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:41.378 [2024-12-10 05:01:29.877152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.378 [2024-12-10 05:01:29.877160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:41.378 [2024-12-10 05:01:29.877179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:36240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.378 [2024-12-10 05:01:29.877186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:41.378 10677.74 IOPS, 41.71 MiB/s [2024-12-10T04:01:32.515Z] 10706.29 IOPS, 41.82 MiB/s [2024-12-10T04:01:32.515Z] Received shutdown signal, test time was about 28.897624 seconds 00:24:41.378 00:24:41.378 Latency(us) 00:24:41.378 [2024-12-10T04:01:32.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.378 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:41.378 Verification LBA range: start 0x0 length 0x4000 00:24:41.378 Nvme0n1 : 28.90 10726.34 41.90 0.00 0.00 11913.28 616.35 3019898.88 00:24:41.378 [2024-12-10T04:01:32.515Z] =================================================================================================================== 00:24:41.378 [2024-12-10T04:01:32.515Z] Total : 10726.34 41.90 0.00 0.00 
11913.28 616.35 3019898.88
00:24:41.378 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:41.637 rmmod nvme_tcp
00:24:41.637 rmmod nvme_fabrics
00:24:41.637 rmmod nvme_keyring
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 734841 ']'
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 734841
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 734841 ']'
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 734841
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 734841
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 734841'
00:24:41.637 killing process with pid 734841
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 734841
00:24:41.637 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 734841
00:24:41.897 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:41.897 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:41.897 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:41.897 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:24:41.897 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:24:41.897 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:41.897 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:24:41.897 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:41.897 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:41.897 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:41.897 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:41.897 05:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:44.434 05:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:44.434
00:24:44.434 real 0m40.615s
00:24:44.434 user 1m50.498s
00:24:44.434 sys 0m11.318s
00:24:44.434 05:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:44.434 05:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:44.434 ************************************
00:24:44.434 END TEST nvmf_host_multipath_status
00:24:44.434 ************************************
00:24:44.434 05:01:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:24:44.434 05:01:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:44.434 05:01:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:44.434 05:01:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.434 ************************************
00:24:44.434 START TEST nvmf_discovery_remove_ifc
00:24:44.434 ************************************
05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:24:44.434 * Looking for test storage...
00:24:44.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:44.434 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:44.434 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:24:44.434 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:24:44.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:44.435 --rc genhtml_branch_coverage=1
00:24:44.435 --rc genhtml_function_coverage=1
00:24:44.435 --rc genhtml_legend=1
00:24:44.435 --rc geninfo_all_blocks=1
00:24:44.435 --rc geninfo_unexecuted_blocks=1
00:24:44.435
00:24:44.435 '
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:24:44.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:44.435 --rc genhtml_branch_coverage=1
00:24:44.435 --rc genhtml_function_coverage=1
00:24:44.435 --rc genhtml_legend=1
00:24:44.435 --rc geninfo_all_blocks=1
00:24:44.435 --rc geninfo_unexecuted_blocks=1
00:24:44.435
00:24:44.435 '
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:24:44.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:44.435 --rc genhtml_branch_coverage=1
00:24:44.435 --rc genhtml_function_coverage=1
00:24:44.435 --rc genhtml_legend=1
00:24:44.435 --rc geninfo_all_blocks=1
00:24:44.435 --rc geninfo_unexecuted_blocks=1
00:24:44.435
00:24:44.435 '
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:24:44.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:44.435 --rc genhtml_branch_coverage=1
00:24:44.435 --rc genhtml_function_coverage=1
00:24:44.435 --rc genhtml_legend=1
00:24:44.435 --rc geninfo_all_blocks=1
00:24:44.435 --rc geninfo_unexecuted_blocks=1
00:24:44.435
00:24:44.435 '
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:44.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:44.435 
05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:44.435 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:44.436 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:44.436 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:44.436 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.436 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:44.436 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:44.436 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:44.436 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.436 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.436 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.436 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:44.436 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:44.436 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:44.436 05:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:49.711 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:49.711 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:49.711 Found net devices under 0000:af:00.0: cvl_0_0 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:49.711 05:01:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:49.711 Found net devices under 0000:af:00.1: cvl_0_1 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:49.711 05:01:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.711 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.971 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.971 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.971 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:49.971 05:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.971 05:01:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:49.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:49.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:24:49.971 00:24:49.971 --- 10.0.0.2 ping statistics --- 00:24:49.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.971 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:49.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:24:49.971 00:24:49.971 --- 10.0.0.1 ping statistics --- 00:24:49.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.971 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=743660 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 743660 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 743660 ']' 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:49.971 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.231 [2024-12-10 05:01:41.135363] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:24:50.231 [2024-12-10 05:01:41.135410] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.231 [2024-12-10 05:01:41.211348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.231 [2024-12-10 05:01:41.249148] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.231 [2024-12-10 05:01:41.249183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:50.231 [2024-12-10 05:01:41.249190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.231 [2024-12-10 05:01:41.249196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.231 [2024-12-10 05:01:41.249201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:50.231 [2024-12-10 05:01:41.249666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.231 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:50.231 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:50.231 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:50.231 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:50.231 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.490 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.490 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:50.490 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.490 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.490 [2024-12-10 05:01:41.405100] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.490 [2024-12-10 05:01:41.413278] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:50.490 null0 00:24:50.490 [2024-12-10 05:01:41.445255] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:24:50.490 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.490 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=743682 00:24:50.490 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:50.490 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 743682 /tmp/host.sock 00:24:50.490 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 743682 ']' 00:24:50.490 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:50.490 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.490 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:50.490 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:50.490 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.490 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.490 [2024-12-10 05:01:41.514495] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:24:50.490 [2024-12-10 05:01:41.514535] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid743682 ] 00:24:50.490 [2024-12-10 05:01:41.586827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.749 [2024-12-10 05:01:41.628798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.749 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:50.749 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:50.749 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:50.749 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:50.749 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.749 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.749 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.749 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:50.749 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.749 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.749 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.749 05:01:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:50.749 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.749 05:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.687 [2024-12-10 05:01:42.771216] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:51.687 [2024-12-10 05:01:42.771234] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:51.687 [2024-12-10 05:01:42.771249] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:51.945 [2024-12-10 05:01:42.900647] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:52.205 [2024-12-10 05:01:43.082619] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:52.205 [2024-12-10 05:01:43.083244] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xea7a90:1 started. 
00:24:52.205 [2024-12-10 05:01:43.084552] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:52.205 [2024-12-10 05:01:43.084588] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:52.205 [2024-12-10 05:01:43.084607] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:52.205 [2024-12-10 05:01:43.084619] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:52.205 [2024-12-10 05:01:43.084635] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:52.205 [2024-12-10 05:01:43.090886] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xea7a90 was disconnected and freed. delete nvme_qpair. 
00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:52.205 05:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:53.583 05:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:53.583 05:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.583 05:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:53.583 05:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.583 05:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:53.583 05:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.583 05:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:53.583 05:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.583 05:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:53.583 05:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:54.522 05:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:54.522 05:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.522 05:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:24:54.522 05:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.522 05:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:54.522 05:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.522 05:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:54.522 05:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.522 05:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:54.522 05:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:55.459 05:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:55.459 05:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.459 05:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:55.459 05:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.459 05:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:55.459 05:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:55.459 05:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:55.459 05:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.459 05:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:55.459 05:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:56.396 05:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:56.396 05:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.396 05:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:56.396 05:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.396 05:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:56.396 05:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:56.396 05:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:56.396 05:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.396 05:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:56.396 05:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:57.775 05:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:57.775 05:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.775 05:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:57.775 05:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.775 05:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:57.775 05:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 
00:24:57.775 05:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:57.775 05:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.775 [2024-12-10 05:01:48.526230] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:57.775 [2024-12-10 05:01:48.526267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.775 [2024-12-10 05:01:48.526278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.775 [2024-12-10 05:01:48.526288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.775 [2024-12-10 05:01:48.526296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.775 [2024-12-10 05:01:48.526304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.775 [2024-12-10 05:01:48.526311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.775 [2024-12-10 05:01:48.526318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.775 [2024-12-10 05:01:48.526325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.775 [2024-12-10 05:01:48.526332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 
cdw11:00000000 00:24:57.775 [2024-12-10 05:01:48.526338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.775 [2024-12-10 05:01:48.526346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe842b0 is same with the state(6) to be set 00:24:57.775 05:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:57.775 05:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:57.775 [2024-12-10 05:01:48.536251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe842b0 (9): Bad file descriptor 00:24:57.775 [2024-12-10 05:01:48.546287] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:57.775 [2024-12-10 05:01:48.546301] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:57.775 [2024-12-10 05:01:48.546307] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:57.775 [2024-12-10 05:01:48.546315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:57.775 [2024-12-10 05:01:48.546336] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:58.712 05:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:58.712 05:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.712 05:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:58.712 05:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.712 05:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:58.712 05:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:58.712 05:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:58.712 [2024-12-10 05:01:49.608224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:58.712 [2024-12-10 05:01:49.608308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe842b0 with addr=10.0.0.2, port=4420 00:24:58.712 [2024-12-10 05:01:49.608343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe842b0 is same with the state(6) to be set 00:24:58.712 [2024-12-10 05:01:49.608397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe842b0 (9): Bad file descriptor 00:24:58.712 [2024-12-10 05:01:49.609346] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:24:58.712 [2024-12-10 05:01:49.609409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:58.712 [2024-12-10 05:01:49.609433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:58.712 [2024-12-10 05:01:49.609456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:58.712 [2024-12-10 05:01:49.609476] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:58.712 [2024-12-10 05:01:49.609492] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:58.712 [2024-12-10 05:01:49.609504] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:58.712 [2024-12-10 05:01:49.609527] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:58.712 [2024-12-10 05:01:49.609541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:58.712 05:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.712 05:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:58.712 05:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:59.649 [2024-12-10 05:01:50.612051] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:59.649 [2024-12-10 05:01:50.612080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:59.650 [2024-12-10 05:01:50.612093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:59.650 [2024-12-10 05:01:50.612101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:59.650 [2024-12-10 05:01:50.612114] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:59.650 [2024-12-10 05:01:50.612121] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:59.650 [2024-12-10 05:01:50.612126] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:59.650 [2024-12-10 05:01:50.612131] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:59.650 [2024-12-10 05:01:50.612151] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:59.650 [2024-12-10 05:01:50.612179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.650 [2024-12-10 05:01:50.612189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.650 [2024-12-10 05:01:50.612199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.650 [2024-12-10 05:01:50.612205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.650 [2024-12-10 05:01:50.612213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:59.650 [2024-12-10 05:01:50.612219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.650 [2024-12-10 05:01:50.612226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.650 [2024-12-10 05:01:50.612234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.650 [2024-12-10 05:01:50.612241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.650 [2024-12-10 05:01:50.612248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.650 [2024-12-10 05:01:50.612255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:59.650 [2024-12-10 05:01:50.612582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe739a0 (9): Bad file descriptor 00:24:59.650 [2024-12-10 05:01:50.613592] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:59.650 [2024-12-10 05:01:50.613604] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:59.650 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:59.650 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:59.650 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:59.650 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:59.650 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:59.650 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.650 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:59.650 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.650 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:59.650 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.650 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.909 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:59.909 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:59.909 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:59.909 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:59.909 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:59.909 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.909 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.909 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:59.909 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:59.909 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:59.909 05:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:00.845 05:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:00.845 05:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.845 05:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:00.845 05:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.845 05:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:00.845 05:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:00.845 05:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:00.845 05:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.845 05:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:00.845 05:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:01.781 [2024-12-10 05:01:52.668312] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:01.781 [2024-12-10 05:01:52.668329] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:01.781 [2024-12-10 05:01:52.668342] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:01.781 [2024-12-10 05:01:52.754592] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:01.781 [2024-12-10 05:01:52.857316] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:01.781 [2024-12-10 05:01:52.857933] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xe8eaa0:1 started. 00:25:01.781 [2024-12-10 05:01:52.858958] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:01.781 [2024-12-10 05:01:52.858989] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:01.781 [2024-12-10 05:01:52.859006] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:01.781 [2024-12-10 05:01:52.859017] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:01.781 [2024-12-10 05:01:52.859024] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:01.781 [2024-12-10 05:01:52.865659] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xe8eaa0 was disconnected and freed. delete nvme_qpair. 
00:25:02.041 05:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:02.041 05:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.041 05:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:02.041 05:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.041 05:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:02.041 05:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:02.041 05:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:02.041 05:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.041 05:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:02.041 05:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:02.041 05:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 743682 00:25:02.041 05:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 743682 ']' 00:25:02.041 05:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 743682 00:25:02.041 05:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:02.041 05:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:02.041 05:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 743682 00:25:02.041 
05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:02.041 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:02.041 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 743682' 00:25:02.041 killing process with pid 743682 00:25:02.041 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 743682 00:25:02.041 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 743682 00:25:02.041 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:02.041 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:02.041 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:02.041 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:02.041 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:02.041 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:02.041 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:02.300 rmmod nvme_tcp 00:25:02.301 rmmod nvme_fabrics 00:25:02.301 rmmod nvme_keyring 00:25:02.301 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:02.301 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:02.301 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:02.301 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 743660 ']' 00:25:02.301 05:01:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 743660 00:25:02.301 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 743660 ']' 00:25:02.301 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 743660 00:25:02.301 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:02.301 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:02.301 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 743660 00:25:02.301 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:02.301 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:02.301 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 743660' 00:25:02.301 killing process with pid 743660 00:25:02.301 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 743660 00:25:02.301 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 743660 00:25:02.560 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:02.560 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:02.560 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:02.560 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:02.560 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:02.560 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:02.560 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:02.560 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:02.560 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:02.560 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.560 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.560 05:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.465 05:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:04.465 00:25:04.465 real 0m20.484s 00:25:04.465 user 0m24.907s 00:25:04.465 sys 0m5.746s 00:25:04.465 05:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:04.465 05:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:04.465 ************************************ 00:25:04.465 END TEST nvmf_discovery_remove_ifc 00:25:04.465 ************************************ 00:25:04.465 05:01:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:04.465 05:01:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:04.465 05:01:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:04.465 05:01:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.465 ************************************ 00:25:04.465 START TEST nvmf_identify_kernel_target 
00:25:04.465 ************************************ 00:25:04.465 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:04.725 * Looking for test storage... 00:25:04.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@341 -- # ver2_l=1 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:04.725 
05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:04.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.725 --rc genhtml_branch_coverage=1 00:25:04.725 --rc genhtml_function_coverage=1 00:25:04.725 --rc genhtml_legend=1 00:25:04.725 --rc geninfo_all_blocks=1 00:25:04.725 --rc geninfo_unexecuted_blocks=1 00:25:04.725 00:25:04.725 ' 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:04.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.725 --rc genhtml_branch_coverage=1 00:25:04.725 --rc genhtml_function_coverage=1 00:25:04.725 --rc genhtml_legend=1 00:25:04.725 --rc geninfo_all_blocks=1 00:25:04.725 --rc geninfo_unexecuted_blocks=1 00:25:04.725 00:25:04.725 ' 00:25:04.725 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:04.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.725 --rc genhtml_branch_coverage=1 00:25:04.725 --rc genhtml_function_coverage=1 00:25:04.725 --rc genhtml_legend=1 00:25:04.725 --rc geninfo_all_blocks=1 00:25:04.725 --rc geninfo_unexecuted_blocks=1 00:25:04.725 00:25:04.726 ' 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:04.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.726 --rc genhtml_branch_coverage=1 00:25:04.726 --rc genhtml_function_coverage=1 00:25:04.726 --rc genhtml_legend=1 00:25:04.726 --rc geninfo_all_blocks=1 00:25:04.726 --rc geninfo_unexecuted_blocks=1 00:25:04.726 
00:25:04.726 ' 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.726 05:01:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:04.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:04.726 05:01:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.297 05:02:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:11.297 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:11.297 05:02:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:11.297 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.297 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.298 05:02:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:11.298 Found net devices under 0000:af:00.0: cvl_0_0 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:11.298 Found net devices under 0000:af:00.1: cvl_0_1 
00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:11.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:11.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:25:11.298 00:25:11.298 --- 10.0.0.2 ping statistics --- 00:25:11.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.298 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:11.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:25:11.298 00:25:11.298 --- 10.0.0.1 ping statistics --- 00:25:11.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.298 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:11.298 
05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:11.298 05:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:13.203 Waiting for block devices as requested 00:25:13.462 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:13.462 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:13.721 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:13.721 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:13.721 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:13.721 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:13.980 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:13.980 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:13.980 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:14.239 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:14.239 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:14.239 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:14.239 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:14.497 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:14.497 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:25:14.497 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:14.756 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:14.756 No valid GPT data, bailing 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:14.756 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:15.016 00:25:15.016 Discovery Log Number of Records 2, Generation counter 2 00:25:15.016 =====Discovery Log Entry 0====== 00:25:15.016 trtype: tcp 00:25:15.016 adrfam: ipv4 00:25:15.016 subtype: current discovery subsystem 
00:25:15.016 treq: not specified, sq flow control disable supported 00:25:15.016 portid: 1 00:25:15.016 trsvcid: 4420 00:25:15.016 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:15.016 traddr: 10.0.0.1 00:25:15.016 eflags: none 00:25:15.016 sectype: none 00:25:15.016 =====Discovery Log Entry 1====== 00:25:15.016 trtype: tcp 00:25:15.016 adrfam: ipv4 00:25:15.016 subtype: nvme subsystem 00:25:15.016 treq: not specified, sq flow control disable supported 00:25:15.016 portid: 1 00:25:15.016 trsvcid: 4420 00:25:15.016 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:15.016 traddr: 10.0.0.1 00:25:15.016 eflags: none 00:25:15.016 sectype: none 00:25:15.016 05:02:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:15.016 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:15.016 ===================================================== 00:25:15.016 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:15.016 ===================================================== 00:25:15.016 Controller Capabilities/Features 00:25:15.016 ================================ 00:25:15.016 Vendor ID: 0000 00:25:15.016 Subsystem Vendor ID: 0000 00:25:15.016 Serial Number: 366e77eccd1cee76424c 00:25:15.016 Model Number: Linux 00:25:15.016 Firmware Version: 6.8.9-20 00:25:15.016 Recommended Arb Burst: 0 00:25:15.016 IEEE OUI Identifier: 00 00 00 00:25:15.016 Multi-path I/O 00:25:15.016 May have multiple subsystem ports: No 00:25:15.016 May have multiple controllers: No 00:25:15.016 Associated with SR-IOV VF: No 00:25:15.016 Max Data Transfer Size: Unlimited 00:25:15.017 Max Number of Namespaces: 0 00:25:15.017 Max Number of I/O Queues: 1024 00:25:15.017 NVMe Specification Version (VS): 1.3 00:25:15.017 NVMe Specification Version (Identify): 1.3 00:25:15.017 Maximum Queue Entries: 1024 
00:25:15.017 Contiguous Queues Required: No 00:25:15.017 Arbitration Mechanisms Supported 00:25:15.017 Weighted Round Robin: Not Supported 00:25:15.017 Vendor Specific: Not Supported 00:25:15.017 Reset Timeout: 7500 ms 00:25:15.017 Doorbell Stride: 4 bytes 00:25:15.017 NVM Subsystem Reset: Not Supported 00:25:15.017 Command Sets Supported 00:25:15.017 NVM Command Set: Supported 00:25:15.017 Boot Partition: Not Supported 00:25:15.017 Memory Page Size Minimum: 4096 bytes 00:25:15.017 Memory Page Size Maximum: 4096 bytes 00:25:15.017 Persistent Memory Region: Not Supported 00:25:15.017 Optional Asynchronous Events Supported 00:25:15.017 Namespace Attribute Notices: Not Supported 00:25:15.017 Firmware Activation Notices: Not Supported 00:25:15.017 ANA Change Notices: Not Supported 00:25:15.017 PLE Aggregate Log Change Notices: Not Supported 00:25:15.017 LBA Status Info Alert Notices: Not Supported 00:25:15.017 EGE Aggregate Log Change Notices: Not Supported 00:25:15.017 Normal NVM Subsystem Shutdown event: Not Supported 00:25:15.017 Zone Descriptor Change Notices: Not Supported 00:25:15.017 Discovery Log Change Notices: Supported 00:25:15.017 Controller Attributes 00:25:15.017 128-bit Host Identifier: Not Supported 00:25:15.017 Non-Operational Permissive Mode: Not Supported 00:25:15.017 NVM Sets: Not Supported 00:25:15.017 Read Recovery Levels: Not Supported 00:25:15.017 Endurance Groups: Not Supported 00:25:15.017 Predictable Latency Mode: Not Supported 00:25:15.017 Traffic Based Keep ALive: Not Supported 00:25:15.017 Namespace Granularity: Not Supported 00:25:15.017 SQ Associations: Not Supported 00:25:15.017 UUID List: Not Supported 00:25:15.017 Multi-Domain Subsystem: Not Supported 00:25:15.017 Fixed Capacity Management: Not Supported 00:25:15.017 Variable Capacity Management: Not Supported 00:25:15.017 Delete Endurance Group: Not Supported 00:25:15.017 Delete NVM Set: Not Supported 00:25:15.017 Extended LBA Formats Supported: Not Supported 00:25:15.017 Flexible 
Data Placement Supported: Not Supported 00:25:15.017 00:25:15.017 Controller Memory Buffer Support 00:25:15.017 ================================ 00:25:15.017 Supported: No 00:25:15.017 00:25:15.017 Persistent Memory Region Support 00:25:15.017 ================================ 00:25:15.017 Supported: No 00:25:15.017 00:25:15.017 Admin Command Set Attributes 00:25:15.017 ============================ 00:25:15.017 Security Send/Receive: Not Supported 00:25:15.017 Format NVM: Not Supported 00:25:15.017 Firmware Activate/Download: Not Supported 00:25:15.017 Namespace Management: Not Supported 00:25:15.017 Device Self-Test: Not Supported 00:25:15.017 Directives: Not Supported 00:25:15.017 NVMe-MI: Not Supported 00:25:15.017 Virtualization Management: Not Supported 00:25:15.017 Doorbell Buffer Config: Not Supported 00:25:15.017 Get LBA Status Capability: Not Supported 00:25:15.017 Command & Feature Lockdown Capability: Not Supported 00:25:15.017 Abort Command Limit: 1 00:25:15.017 Async Event Request Limit: 1 00:25:15.017 Number of Firmware Slots: N/A 00:25:15.017 Firmware Slot 1 Read-Only: N/A 00:25:15.017 Firmware Activation Without Reset: N/A 00:25:15.017 Multiple Update Detection Support: N/A 00:25:15.017 Firmware Update Granularity: No Information Provided 00:25:15.017 Per-Namespace SMART Log: No 00:25:15.017 Asymmetric Namespace Access Log Page: Not Supported 00:25:15.017 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:15.017 Command Effects Log Page: Not Supported 00:25:15.017 Get Log Page Extended Data: Supported 00:25:15.017 Telemetry Log Pages: Not Supported 00:25:15.017 Persistent Event Log Pages: Not Supported 00:25:15.017 Supported Log Pages Log Page: May Support 00:25:15.017 Commands Supported & Effects Log Page: Not Supported 00:25:15.017 Feature Identifiers & Effects Log Page:May Support 00:25:15.017 NVMe-MI Commands & Effects Log Page: May Support 00:25:15.017 Data Area 4 for Telemetry Log: Not Supported 00:25:15.017 Error Log Page Entries 
Supported: 1 00:25:15.017 Keep Alive: Not Supported 00:25:15.017 00:25:15.017 NVM Command Set Attributes 00:25:15.017 ========================== 00:25:15.017 Submission Queue Entry Size 00:25:15.017 Max: 1 00:25:15.017 Min: 1 00:25:15.017 Completion Queue Entry Size 00:25:15.017 Max: 1 00:25:15.017 Min: 1 00:25:15.017 Number of Namespaces: 0 00:25:15.017 Compare Command: Not Supported 00:25:15.017 Write Uncorrectable Command: Not Supported 00:25:15.017 Dataset Management Command: Not Supported 00:25:15.017 Write Zeroes Command: Not Supported 00:25:15.017 Set Features Save Field: Not Supported 00:25:15.017 Reservations: Not Supported 00:25:15.017 Timestamp: Not Supported 00:25:15.017 Copy: Not Supported 00:25:15.017 Volatile Write Cache: Not Present 00:25:15.017 Atomic Write Unit (Normal): 1 00:25:15.017 Atomic Write Unit (PFail): 1 00:25:15.017 Atomic Compare & Write Unit: 1 00:25:15.017 Fused Compare & Write: Not Supported 00:25:15.017 Scatter-Gather List 00:25:15.017 SGL Command Set: Supported 00:25:15.017 SGL Keyed: Not Supported 00:25:15.017 SGL Bit Bucket Descriptor: Not Supported 00:25:15.017 SGL Metadata Pointer: Not Supported 00:25:15.017 Oversized SGL: Not Supported 00:25:15.017 SGL Metadata Address: Not Supported 00:25:15.017 SGL Offset: Supported 00:25:15.017 Transport SGL Data Block: Not Supported 00:25:15.017 Replay Protected Memory Block: Not Supported 00:25:15.017 00:25:15.017 Firmware Slot Information 00:25:15.017 ========================= 00:25:15.017 Active slot: 0 00:25:15.017 00:25:15.017 00:25:15.017 Error Log 00:25:15.017 ========= 00:25:15.017 00:25:15.017 Active Namespaces 00:25:15.017 ================= 00:25:15.017 Discovery Log Page 00:25:15.017 ================== 00:25:15.017 Generation Counter: 2 00:25:15.017 Number of Records: 2 00:25:15.017 Record Format: 0 00:25:15.017 00:25:15.017 Discovery Log Entry 0 00:25:15.017 ---------------------- 00:25:15.017 Transport Type: 3 (TCP) 00:25:15.017 Address Family: 1 (IPv4) 00:25:15.017 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:25:15.017 Entry Flags: 00:25:15.017 Duplicate Returned Information: 0 00:25:15.017 Explicit Persistent Connection Support for Discovery: 0 00:25:15.017 Transport Requirements: 00:25:15.017 Secure Channel: Not Specified 00:25:15.017 Port ID: 1 (0x0001) 00:25:15.017 Controller ID: 65535 (0xffff) 00:25:15.017 Admin Max SQ Size: 32 00:25:15.017 Transport Service Identifier: 4420 00:25:15.017 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:15.017 Transport Address: 10.0.0.1 00:25:15.017 Discovery Log Entry 1 00:25:15.017 ---------------------- 00:25:15.017 Transport Type: 3 (TCP) 00:25:15.017 Address Family: 1 (IPv4) 00:25:15.017 Subsystem Type: 2 (NVM Subsystem) 00:25:15.017 Entry Flags: 00:25:15.017 Duplicate Returned Information: 0 00:25:15.017 Explicit Persistent Connection Support for Discovery: 0 00:25:15.017 Transport Requirements: 00:25:15.017 Secure Channel: Not Specified 00:25:15.017 Port ID: 1 (0x0001) 00:25:15.017 Controller ID: 65535 (0xffff) 00:25:15.017 Admin Max SQ Size: 32 00:25:15.017 Transport Service Identifier: 4420 00:25:15.017 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:15.017 Transport Address: 10.0.0.1 00:25:15.017 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:15.017 get_feature(0x01) failed 00:25:15.017 get_feature(0x02) failed 00:25:15.017 get_feature(0x04) failed 00:25:15.017 ===================================================== 00:25:15.017 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:15.017 ===================================================== 00:25:15.017 Controller Capabilities/Features 00:25:15.017 ================================ 00:25:15.017 Vendor ID: 0000 00:25:15.017 Subsystem Vendor ID: 
0000 00:25:15.017 Serial Number: 2eb9b05fc4753142ff69 00:25:15.017 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:15.017 Firmware Version: 6.8.9-20 00:25:15.017 Recommended Arb Burst: 6 00:25:15.017 IEEE OUI Identifier: 00 00 00 00:25:15.017 Multi-path I/O 00:25:15.017 May have multiple subsystem ports: Yes 00:25:15.017 May have multiple controllers: Yes 00:25:15.017 Associated with SR-IOV VF: No 00:25:15.017 Max Data Transfer Size: Unlimited 00:25:15.017 Max Number of Namespaces: 1024 00:25:15.017 Max Number of I/O Queues: 128 00:25:15.017 NVMe Specification Version (VS): 1.3 00:25:15.017 NVMe Specification Version (Identify): 1.3 00:25:15.017 Maximum Queue Entries: 1024 00:25:15.017 Contiguous Queues Required: No 00:25:15.017 Arbitration Mechanisms Supported 00:25:15.017 Weighted Round Robin: Not Supported 00:25:15.017 Vendor Specific: Not Supported 00:25:15.017 Reset Timeout: 7500 ms 00:25:15.018 Doorbell Stride: 4 bytes 00:25:15.018 NVM Subsystem Reset: Not Supported 00:25:15.018 Command Sets Supported 00:25:15.018 NVM Command Set: Supported 00:25:15.018 Boot Partition: Not Supported 00:25:15.018 Memory Page Size Minimum: 4096 bytes 00:25:15.018 Memory Page Size Maximum: 4096 bytes 00:25:15.018 Persistent Memory Region: Not Supported 00:25:15.018 Optional Asynchronous Events Supported 00:25:15.018 Namespace Attribute Notices: Supported 00:25:15.018 Firmware Activation Notices: Not Supported 00:25:15.018 ANA Change Notices: Supported 00:25:15.018 PLE Aggregate Log Change Notices: Not Supported 00:25:15.018 LBA Status Info Alert Notices: Not Supported 00:25:15.018 EGE Aggregate Log Change Notices: Not Supported 00:25:15.018 Normal NVM Subsystem Shutdown event: Not Supported 00:25:15.018 Zone Descriptor Change Notices: Not Supported 00:25:15.018 Discovery Log Change Notices: Not Supported 00:25:15.018 Controller Attributes 00:25:15.018 128-bit Host Identifier: Supported 00:25:15.018 Non-Operational Permissive Mode: Not Supported 00:25:15.018 NVM Sets: Not 
Supported 00:25:15.018 Read Recovery Levels: Not Supported 00:25:15.018 Endurance Groups: Not Supported 00:25:15.018 Predictable Latency Mode: Not Supported 00:25:15.018 Traffic Based Keep ALive: Supported 00:25:15.018 Namespace Granularity: Not Supported 00:25:15.018 SQ Associations: Not Supported 00:25:15.018 UUID List: Not Supported 00:25:15.018 Multi-Domain Subsystem: Not Supported 00:25:15.018 Fixed Capacity Management: Not Supported 00:25:15.018 Variable Capacity Management: Not Supported 00:25:15.018 Delete Endurance Group: Not Supported 00:25:15.018 Delete NVM Set: Not Supported 00:25:15.018 Extended LBA Formats Supported: Not Supported 00:25:15.018 Flexible Data Placement Supported: Not Supported 00:25:15.018 00:25:15.018 Controller Memory Buffer Support 00:25:15.018 ================================ 00:25:15.018 Supported: No 00:25:15.018 00:25:15.018 Persistent Memory Region Support 00:25:15.018 ================================ 00:25:15.018 Supported: No 00:25:15.018 00:25:15.018 Admin Command Set Attributes 00:25:15.018 ============================ 00:25:15.018 Security Send/Receive: Not Supported 00:25:15.018 Format NVM: Not Supported 00:25:15.018 Firmware Activate/Download: Not Supported 00:25:15.018 Namespace Management: Not Supported 00:25:15.018 Device Self-Test: Not Supported 00:25:15.018 Directives: Not Supported 00:25:15.018 NVMe-MI: Not Supported 00:25:15.018 Virtualization Management: Not Supported 00:25:15.018 Doorbell Buffer Config: Not Supported 00:25:15.018 Get LBA Status Capability: Not Supported 00:25:15.018 Command & Feature Lockdown Capability: Not Supported 00:25:15.018 Abort Command Limit: 4 00:25:15.018 Async Event Request Limit: 4 00:25:15.018 Number of Firmware Slots: N/A 00:25:15.018 Firmware Slot 1 Read-Only: N/A 00:25:15.018 Firmware Activation Without Reset: N/A 00:25:15.018 Multiple Update Detection Support: N/A 00:25:15.018 Firmware Update Granularity: No Information Provided 00:25:15.018 Per-Namespace SMART Log: Yes 
00:25:15.018 Asymmetric Namespace Access Log Page: Supported 00:25:15.018 ANA Transition Time : 10 sec 00:25:15.018 00:25:15.018 Asymmetric Namespace Access Capabilities 00:25:15.018 ANA Optimized State : Supported 00:25:15.018 ANA Non-Optimized State : Supported 00:25:15.018 ANA Inaccessible State : Supported 00:25:15.018 ANA Persistent Loss State : Supported 00:25:15.018 ANA Change State : Supported 00:25:15.018 ANAGRPID is not changed : No 00:25:15.018 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:15.018 00:25:15.018 ANA Group Identifier Maximum : 128 00:25:15.018 Number of ANA Group Identifiers : 128 00:25:15.018 Max Number of Allowed Namespaces : 1024 00:25:15.018 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:15.018 Command Effects Log Page: Supported 00:25:15.018 Get Log Page Extended Data: Supported 00:25:15.018 Telemetry Log Pages: Not Supported 00:25:15.018 Persistent Event Log Pages: Not Supported 00:25:15.018 Supported Log Pages Log Page: May Support 00:25:15.018 Commands Supported & Effects Log Page: Not Supported 00:25:15.018 Feature Identifiers & Effects Log Page:May Support 00:25:15.018 NVMe-MI Commands & Effects Log Page: May Support 00:25:15.018 Data Area 4 for Telemetry Log: Not Supported 00:25:15.018 Error Log Page Entries Supported: 128 00:25:15.018 Keep Alive: Supported 00:25:15.018 Keep Alive Granularity: 1000 ms 00:25:15.018 00:25:15.018 NVM Command Set Attributes 00:25:15.018 ========================== 00:25:15.018 Submission Queue Entry Size 00:25:15.018 Max: 64 00:25:15.018 Min: 64 00:25:15.018 Completion Queue Entry Size 00:25:15.018 Max: 16 00:25:15.018 Min: 16 00:25:15.018 Number of Namespaces: 1024 00:25:15.018 Compare Command: Not Supported 00:25:15.018 Write Uncorrectable Command: Not Supported 00:25:15.018 Dataset Management Command: Supported 00:25:15.018 Write Zeroes Command: Supported 00:25:15.018 Set Features Save Field: Not Supported 00:25:15.018 Reservations: Not Supported 00:25:15.018 Timestamp: Not Supported 
00:25:15.018 Copy: Not Supported 00:25:15.018 Volatile Write Cache: Present 00:25:15.018 Atomic Write Unit (Normal): 1 00:25:15.018 Atomic Write Unit (PFail): 1 00:25:15.018 Atomic Compare & Write Unit: 1 00:25:15.018 Fused Compare & Write: Not Supported 00:25:15.018 Scatter-Gather List 00:25:15.018 SGL Command Set: Supported 00:25:15.018 SGL Keyed: Not Supported 00:25:15.018 SGL Bit Bucket Descriptor: Not Supported 00:25:15.018 SGL Metadata Pointer: Not Supported 00:25:15.018 Oversized SGL: Not Supported 00:25:15.018 SGL Metadata Address: Not Supported 00:25:15.018 SGL Offset: Supported 00:25:15.018 Transport SGL Data Block: Not Supported 00:25:15.018 Replay Protected Memory Block: Not Supported 00:25:15.018 00:25:15.018 Firmware Slot Information 00:25:15.018 ========================= 00:25:15.018 Active slot: 0 00:25:15.018 00:25:15.018 Asymmetric Namespace Access 00:25:15.018 =========================== 00:25:15.018 Change Count : 0 00:25:15.018 Number of ANA Group Descriptors : 1 00:25:15.018 ANA Group Descriptor : 0 00:25:15.018 ANA Group ID : 1 00:25:15.018 Number of NSID Values : 1 00:25:15.018 Change Count : 0 00:25:15.018 ANA State : 1 00:25:15.018 Namespace Identifier : 1 00:25:15.018 00:25:15.018 Commands Supported and Effects 00:25:15.018 ============================== 00:25:15.018 Admin Commands 00:25:15.018 -------------- 00:25:15.018 Get Log Page (02h): Supported 00:25:15.018 Identify (06h): Supported 00:25:15.018 Abort (08h): Supported 00:25:15.018 Set Features (09h): Supported 00:25:15.018 Get Features (0Ah): Supported 00:25:15.018 Asynchronous Event Request (0Ch): Supported 00:25:15.018 Keep Alive (18h): Supported 00:25:15.018 I/O Commands 00:25:15.018 ------------ 00:25:15.018 Flush (00h): Supported 00:25:15.018 Write (01h): Supported LBA-Change 00:25:15.018 Read (02h): Supported 00:25:15.018 Write Zeroes (08h): Supported LBA-Change 00:25:15.018 Dataset Management (09h): Supported 00:25:15.018 00:25:15.018 Error Log 00:25:15.018 ========= 
00:25:15.018 Entry: 0 00:25:15.018 Error Count: 0x3 00:25:15.018 Submission Queue Id: 0x0 00:25:15.018 Command Id: 0x5 00:25:15.018 Phase Bit: 0 00:25:15.018 Status Code: 0x2 00:25:15.018 Status Code Type: 0x0 00:25:15.018 Do Not Retry: 1 00:25:15.018 Error Location: 0x28 00:25:15.018 LBA: 0x0 00:25:15.018 Namespace: 0x0 00:25:15.018 Vendor Log Page: 0x0 00:25:15.018 ----------- 00:25:15.018 Entry: 1 00:25:15.018 Error Count: 0x2 00:25:15.018 Submission Queue Id: 0x0 00:25:15.018 Command Id: 0x5 00:25:15.018 Phase Bit: 0 00:25:15.018 Status Code: 0x2 00:25:15.018 Status Code Type: 0x0 00:25:15.018 Do Not Retry: 1 00:25:15.018 Error Location: 0x28 00:25:15.018 LBA: 0x0 00:25:15.018 Namespace: 0x0 00:25:15.018 Vendor Log Page: 0x0 00:25:15.018 ----------- 00:25:15.018 Entry: 2 00:25:15.018 Error Count: 0x1 00:25:15.018 Submission Queue Id: 0x0 00:25:15.018 Command Id: 0x4 00:25:15.018 Phase Bit: 0 00:25:15.018 Status Code: 0x2 00:25:15.018 Status Code Type: 0x0 00:25:15.018 Do Not Retry: 1 00:25:15.018 Error Location: 0x28 00:25:15.018 LBA: 0x0 00:25:15.018 Namespace: 0x0 00:25:15.018 Vendor Log Page: 0x0 00:25:15.018 00:25:15.018 Number of Queues 00:25:15.018 ================ 00:25:15.018 Number of I/O Submission Queues: 128 00:25:15.018 Number of I/O Completion Queues: 128 00:25:15.018 00:25:15.018 ZNS Specific Controller Data 00:25:15.018 ============================ 00:25:15.018 Zone Append Size Limit: 0 00:25:15.018 00:25:15.018 00:25:15.018 Active Namespaces 00:25:15.018 ================= 00:25:15.018 get_feature(0x05) failed 00:25:15.018 Namespace ID:1 00:25:15.018 Command Set Identifier: NVM (00h) 00:25:15.018 Deallocate: Supported 00:25:15.018 Deallocated/Unwritten Error: Not Supported 00:25:15.018 Deallocated Read Value: Unknown 00:25:15.019 Deallocate in Write Zeroes: Not Supported 00:25:15.019 Deallocated Guard Field: 0xFFFF 00:25:15.019 Flush: Supported 00:25:15.019 Reservation: Not Supported 00:25:15.019 Namespace Sharing Capabilities: Multiple 
Controllers 00:25:15.019 Size (in LBAs): 1953525168 (931GiB) 00:25:15.019 Capacity (in LBAs): 1953525168 (931GiB) 00:25:15.019 Utilization (in LBAs): 1953525168 (931GiB) 00:25:15.019 UUID: c7cb03dc-961d-441d-a74c-6fb34d9a51b4 00:25:15.019 Thin Provisioning: Not Supported 00:25:15.019 Per-NS Atomic Units: Yes 00:25:15.019 Atomic Boundary Size (Normal): 0 00:25:15.019 Atomic Boundary Size (PFail): 0 00:25:15.019 Atomic Boundary Offset: 0 00:25:15.019 NGUID/EUI64 Never Reused: No 00:25:15.019 ANA group ID: 1 00:25:15.019 Namespace Write Protected: No 00:25:15.019 Number of LBA Formats: 1 00:25:15.019 Current LBA Format: LBA Format #00 00:25:15.019 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:15.019 00:25:15.019 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:15.019 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:15.019 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:15.019 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:15.019 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:15.019 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:15.019 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:15.278 rmmod nvme_tcp 00:25:15.278 rmmod nvme_fabrics 00:25:15.278 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:15.278 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:15.278 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:15.278 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:25:15.278 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:15.278 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:15.278 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:15.278 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:15.278 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:15.278 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:15.278 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:15.278 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:15.278 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:15.278 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.278 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.278 05:02:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.208 05:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:17.208 05:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:17.208 05:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:17.208 05:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:17.208 05:02:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:17.208 05:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:17.208 05:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:17.208 05:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:17.208 05:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:17.208 05:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:17.208 05:02:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:20.574 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:20.574 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:20.574 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:20.574 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:20.574 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:20.574 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:20.574 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:20.574 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:20.574 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:20.574 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:20.574 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:20.574 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:20.574 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:20.574 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:20.574 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:20.574 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:25:21.143 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:21.143 00:25:21.143 real 0m16.554s 00:25:21.143 user 0m4.286s 00:25:21.143 sys 0m8.646s 00:25:21.143 05:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:21.143 05:02:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:21.143 ************************************ 00:25:21.143 END TEST nvmf_identify_kernel_target 00:25:21.143 ************************************ 00:25:21.143 05:02:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:21.143 05:02:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:21.144 05:02:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:21.144 05:02:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.144 ************************************ 00:25:21.144 START TEST nvmf_auth_host 00:25:21.144 ************************************ 00:25:21.144 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:21.404 * Looking for test storage... 
00:25:21.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:21.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.404 --rc genhtml_branch_coverage=1 00:25:21.404 --rc genhtml_function_coverage=1 00:25:21.404 --rc genhtml_legend=1 00:25:21.404 --rc geninfo_all_blocks=1 00:25:21.404 --rc geninfo_unexecuted_blocks=1 00:25:21.404 00:25:21.404 ' 00:25:21.404 05:02:12 
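The `cmp_versions`/`lt` trace above splits each version on separators, pads the shorter one with zeros, and compares numeric components left to right (here concluding that lcov 1.15 < 2). A simplified standalone sketch of that comparison — this mirrors the logic visible in the trace, not the exact `scripts/common.sh` implementation:

```shell
lt() {
    # Return 0 (true) when version $1 sorts strictly before $2,
    # comparing dot-separated numeric components; missing components are 0.
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=${#a[@]}
    (( ${#b[@]} > n )) && n=${#b[@]}
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # versions are equal
}
lt 1.15 2 && echo "1.15 < 2"   # the case decided in the trace above
```

Note the comparison is numeric per component, so 1.9 sorts before 1.15 — the usual pitfall of naive string comparison is avoided.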
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:21.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.404 --rc genhtml_branch_coverage=1 00:25:21.404 --rc genhtml_function_coverage=1 00:25:21.404 --rc genhtml_legend=1 00:25:21.404 --rc geninfo_all_blocks=1 00:25:21.404 --rc geninfo_unexecuted_blocks=1 00:25:21.404 00:25:21.404 ' 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:21.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.404 --rc genhtml_branch_coverage=1 00:25:21.404 --rc genhtml_function_coverage=1 00:25:21.404 --rc genhtml_legend=1 00:25:21.404 --rc geninfo_all_blocks=1 00:25:21.404 --rc geninfo_unexecuted_blocks=1 00:25:21.404 00:25:21.404 ' 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:21.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.404 --rc genhtml_branch_coverage=1 00:25:21.404 --rc genhtml_function_coverage=1 00:25:21.404 --rc genhtml_legend=1 00:25:21.404 --rc geninfo_all_blocks=1 00:25:21.404 --rc geninfo_unexecuted_blocks=1 00:25:21.404 00:25:21.404 ' 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.404 05:02:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:21.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:21.404 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
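The trace above captures a real (non-fatal) failure: `'[' '' -eq 1 ']'` at `nvmf/common.sh` line 33 prints `[: : integer expression expected` because an empty variable is fed to a numeric test. The defensive pattern is to default empties before `-eq`; `FLAG` below is a hypothetical stand-in, not the actual variable common.sh tests:

```shell
# Guard against the "[: : integer expression expected" failure seen in the
# log: default an empty/unset variable to 0 before a numeric comparison.
# FLAG is an illustrative name only.
FLAG=""
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled"
fi
```

`${FLAG:-0}` (with the colon) substitutes the default for both unset and empty values, which is what the failing test needs here.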
digests=("sha256" "sha384" "sha512") 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:21.405 05:02:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:21.405 05:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:27.978 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:27.978 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:27.978 Found net devices under 0000:af:00.0: cvl_0_0 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:27.978 Found net devices under 0000:af:00.1: cvl_0_1 00:25:27.978 05:02:17 
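The "Found net devices under 0000:af:00.0: cvl_0_0" lines come from globbing each NIC's sysfs `net/` directory, as in `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` in the trace. A minimal sketch of that lookup; the optional root parameter is added here for testability and is not part of `nvmf/common.sh`:

```shell
# For a PCI address, list the kernel network interface names registered
# under it, mirroring the pci_net_devs glob from the trace. The second
# argument lets the lookup be pointed at a fake sysfs tree.
list_net_devs() {
    local pci=$1 root=${2:-/sys/bus/pci/devices} d
    for d in "$root/$pci"/net/*; do
        [ -e "$d" ] && basename "$d"
    done
    return 0
}
# On the machine in this log: list_net_devs 0000:af:00.0  ->  cvl_0_0
```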
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:27.978 05:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:27.978 05:02:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:27.978 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:27.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:27.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:25:27.979 00:25:27.979 --- 10.0.0.2 ping statistics --- 00:25:27.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.979 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:27.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:27.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:25:27.979 00:25:27.979 --- 10.0.0.1 ping statistics --- 00:25:27.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.979 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=755454 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 755454 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
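The `nvmf_tcp_init` sequence traced above moves the target NIC into its own network namespace so the initiator (host side, 10.0.0.1 on cvl_0_1) and the SPDK target (10.0.0.2 on cvl_0_0 inside `cvl_0_0_ns_spdk`) can talk over real hardware on one machine, then opens TCP 4420 and ping-verifies both directions. A condensed, hedged recreation — interface names and addressing come from the log; the `DRY_RUN` guard is added here so the steps can be previewed without root:

```shell
# Echo instead of execute when DRY_RUN=1, so the sequence can be inspected
# on a machine without these NICs or root privileges.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

setup_tcp_ns() {
    local tgt=cvl_0_0 ini=cvl_0_1 ns=cvl_0_0_ns_spdk
    run ip netns add "$ns"
    run ip link set "$tgt" netns "$ns"                 # target NIC into ns
    run ip addr add 10.0.0.1/24 dev "$ini"             # initiator side
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
    run ip link set "$ini" up
    run ip netns exec "$ns" ip link set "$tgt" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
}
DRY_RUN=1 setup_tcp_ns
```

With the namespace in place, the target app is then launched via `ip netns exec cvl_0_0_ns_spdk …`, which is exactly the `NVMF_TARGET_NS_CMD` prefix visible later in the log.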
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 755454 ']' 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:27.979 05:02:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b82194b464c583ffbfc50a969151b066 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.uj4 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b82194b464c583ffbfc50a969151b066 0 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b82194b464c583ffbfc50a969151b066 0 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b82194b464c583ffbfc50a969151b066 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.uj4 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.uj4 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.uj4 
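The `gen_dhchap_key null 32` trace above draws 16 random bytes, hex-encodes them into a 32-character secret, writes it to a 0600 `mktemp` file, and records the path in `keys[0]`. A condensed sketch of that flow; `od` replaces the log's `xxd` for portability, and the DHHC-1 wrapping done by the script's inline python (`format_dhchap_key`) is omitted:

```shell
# Draw len/2 random bytes, hex-encode to a len-character secret, store it
# mode 0600 in a temp file, and print the file path (as keys[0] holds above).
gen_key() {
    local len=$1 key file
    key=$(od -An -tx1 -N"$((len / 2))" /dev/urandom | tr -d ' \n')
    file=$(mktemp -t spdk.key-null.XXX)
    printf '%s\n' "$key" > "$file"
    chmod 0600 "$file"
    printf '%s\n' "$file"
}
keyfile=$(gen_key 32)    # file holds 32 hex chars, like the key in the trace
```

The length doubling (16 random bytes → 32 hex characters) matches the `xxd -p -c0 -l 16` / `len=32` pairing in the trace, and likewise `-l 32` for the 64-character sha512 challenge key that follows.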
00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f1bd3a10c90d6f8cc3f5b64206125b928ae5858b5adb9de6e17caf08cec0bff1 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.XlY 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f1bd3a10c90d6f8cc3f5b64206125b928ae5858b5adb9de6e17caf08cec0bff1 3 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f1bd3a10c90d6f8cc3f5b64206125b928ae5858b5adb9de6e17caf08cec0bff1 3 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f1bd3a10c90d6f8cc3f5b64206125b928ae5858b5adb9de6e17caf08cec0bff1 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.XlY 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.XlY 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.XlY 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2d92403ded716d2b63c333d8ab1f408063bbbe29466b0bda 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.OkL 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2d92403ded716d2b63c333d8ab1f408063bbbe29466b0bda 0 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2d92403ded716d2b63c333d8ab1f408063bbbe29466b0bda 0 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2d92403ded716d2b63c333d8ab1f408063bbbe29466b0bda 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.OkL 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.OkL 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.OkL 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=137911c9b6bfdb1e6c17de7772eed4ad6013588420e7f20d 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.nEV 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 137911c9b6bfdb1e6c17de7772eed4ad6013588420e7f20d 2 00:25:27.979 05:02:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 137911c9b6bfdb1e6c17de7772eed4ad6013588420e7f20d 2 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=137911c9b6bfdb1e6c17de7772eed4ad6013588420e7f20d 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:27.979 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.nEV 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.nEV 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.nEV 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=21f1c1e430bdb51a15ec883aee217940 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.hkY 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 21f1c1e430bdb51a15ec883aee217940 1 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 21f1c1e430bdb51a15ec883aee217940 1 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=21f1c1e430bdb51a15ec883aee217940 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.hkY 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.hkY 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.hkY 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6600a8775bf04ff0a16cb3811f787961 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.YVs 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6600a8775bf04ff0a16cb3811f787961 1 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6600a8775bf04ff0a16cb3811f787961 1 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6600a8775bf04ff0a16cb3811f787961 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.YVs 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.YVs 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.YVs 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:27.980 05:02:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=697e2c5b796ffe52d1dd3f29f5d1c87aa38d61f38c0de6dc 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.9sh 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 697e2c5b796ffe52d1dd3f29f5d1c87aa38d61f38c0de6dc 2 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 697e2c5b796ffe52d1dd3f29f5d1c87aa38d61f38c0de6dc 2 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=697e2c5b796ffe52d1dd3f29f5d1c87aa38d61f38c0de6dc 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.9sh 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.9sh 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.9sh 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:27.980 05:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9155f2369154fe8efaf203e582ba448a 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.GEt 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9155f2369154fe8efaf203e582ba448a 0 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9155f2369154fe8efaf203e582ba448a 0 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9155f2369154fe8efaf203e582ba448a 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.GEt 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.GEt 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.GEt 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c378f68ba4111ce15ff8b17aa0f9e377116bc618c7ef86e76be57b75b8171b7f 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.13s 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c378f68ba4111ce15ff8b17aa0f9e377116bc618c7ef86e76be57b75b8171b7f 3 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c378f68ba4111ce15ff8b17aa0f9e377116bc618c7ef86e76be57b75b8171b7f 3 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c378f68ba4111ce15ff8b17aa0f9e377116bc618c7ef86e76be57b75b8171b7f 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:27.980 05:02:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.13s 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.13s 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.13s 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 755454 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 755454 ']' 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
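The `gen_dhchap_key` traces above repeat one recipe: read `len/2` random bytes as a hex string with `xxd`, then wrap that ASCII hex string in the DHHC-1 secret format (base64 of the key text plus a CRC32 trailer, as the NVMe DH-HMAC-CHAP spec describes). A minimal self-contained sketch, with helper names mirroring the log; the real implementations live in `nvmf/common.sh`, and the little-endian CRC32 trailer is inferred from the keys visible in the trace, so treat the details as an approximation:

```shell
# Hedged re-creation of the traced key helpers (not the actual SPDK code).
declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

format_dhchap_key() { # <hex-key> <digest-index 0..3>
  local key=$1 digest=$2
  # DHHC-1:<dd>:<base64(ascii-hex-key || crc32(ascii-hex-key), LE)>:
  python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{digest:02}:{base64.b64encode(key + crc).decode()}:")
PYEOF
}

gen_dhchap_key() { # <digest-name> <hex-len>, e.g. "sha512 64"
  local digest=$1 len=$2 key file
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
  file=$(mktemp -t "spdk.key-${digest}.XXX")
  format_dhchap_key "$key" "${digests[$digest]}" > "$file"
  chmod 0600 "$file"                               # key files must be private
  echo "$file"
}

gen_dhchap_key sha256 32   # prints the path of the new key file
```

Run against the key generated at `host/auth.sh@74` in the trace, this formatting step reproduces the `DHHC-1:00:MmQ5...` secret that appears later when the key is used.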
00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:27.980 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uj4 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.XlY ]] 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XlY 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.OkL 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
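The `keyring_file_add_key` registrations traced here follow the loop at `host/auth.sh@80-82`: each generated key file becomes `key<i>`, and when a controller key exists for that slot, its counterpart becomes `ckey<i>`. A sketch of that loop with `rpc_cmd` stubbed by `echo`, so the shape is visible without a running SPDK target (the real `rpc_cmd` wraps `scripts/rpc.py` against `/var/tmp/spdk.sock`); file paths are taken from the trace:

```shell
# rpc_cmd stub standing in for SPDK's scripts/rpc.py wrapper.
rpc_cmd() { echo "rpc.py $*"; }

keys=(/tmp/spdk.key-null.uj4 /tmp/spdk.key-null.OkL)        # keys[i] from gen_dhchap_key
ckeys=(/tmp/spdk.key-sha512.XlY /tmp/spdk.key-sha384.nEV)   # ckeys[i], may be empty

for i in "${!keys[@]}"; do
  rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
  # ckey$i is only registered when a controller key was generated for slot i
  if [[ -n ${ckeys[i]:-} ]]; then
    rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
  fi
done
```

With real key files and a live target, the same loop makes each secret addressable by keyring name (`key1`, `ckey1`, ...) in later `bdev_nvme_attach_controller --dhchap-key/--dhchap-ctrlr-key` calls.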
00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.nEV ]] 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nEV 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.hkY 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.YVs ]] 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YVs 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.9sh 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.240 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.GEt ]] 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.GEt 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.13s 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.499 05:02:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:28.499 05:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:31.033 Waiting for block devices as requested 00:25:31.033 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:31.293 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:31.293 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:31.293 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:31.293 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:31.551 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:31.551 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:31.551 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:31.810 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:31.810 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:31.810 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:31.810 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:32.069 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:32.069 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:32.069 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:32.327 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:32.327 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:32.894 No valid GPT data, bailing 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:32.894 05:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:33.153 00:25:33.153 Discovery Log Number of Records 2, Generation counter 2 00:25:33.153 =====Discovery Log Entry 0====== 00:25:33.153 trtype: tcp 00:25:33.153 adrfam: ipv4 00:25:33.153 subtype: current discovery subsystem 00:25:33.153 treq: not specified, sq flow control disable supported 00:25:33.153 portid: 1 00:25:33.153 trsvcid: 4420 00:25:33.153 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:33.153 traddr: 10.0.0.1 00:25:33.153 eflags: none 00:25:33.153 sectype: none 00:25:33.153 =====Discovery Log Entry 1====== 00:25:33.153 trtype: tcp 00:25:33.153 adrfam: ipv4 00:25:33.153 subtype: nvme subsystem 00:25:33.153 treq: not specified, sq flow control disable supported 00:25:33.153 portid: 1 00:25:33.153 trsvcid: 4420 00:25:33.153 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:33.153 traddr: 10.0.0.1 00:25:33.153 eflags: none 00:25:33.153 sectype: none 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]] 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.153 nvme0n1 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: ]] 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.153 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.412 nvme0n1 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.412 05:02:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]] 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.412 
05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.412 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.413 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.413 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.413 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.671 nvme0n1 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.671 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: ]] 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.672 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:33.931 nvme0n1 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: ]] 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.931 05:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.189 nvme0n1 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:34.189 05:02:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.189 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.447 nvme0n1 00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.448 
05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG:
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=:
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG:
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: ]]
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=:
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.448 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.707 nvme0n1
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==:
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==:
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==:
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]]
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==:
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.707 nvme0n1
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.707 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/:
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn:
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/:
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: ]]
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn:
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.966 05:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.966 nvme0n1
00:25:34.966 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:34.966 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:34.966 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:34.966 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:34.966 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.966 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.225 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:35.225 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:35.225 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.225 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.225 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.225 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==:
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4:
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==:
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: ]]
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4:
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.226 nvme0n1
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.226 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=:
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=:
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.485 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.485 nvme0n1
00:25:35.486 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.486 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:35.486 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:35.486 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.486 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.486 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG:
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=:
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG:
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: ]]
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=:
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.745 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.004 nvme0n1
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==:
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==:
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==:
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]]
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==:
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:25:36.004 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.005 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.005 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.005 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:36.005 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:36.005 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:36.005 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:36.005 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:36.005 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:36.005 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:36.005 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:36.005 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:36.005 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:36.005 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:36.005 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:36.005 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.005 05:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.264 nvme0n1
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/:
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn:
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/:
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: ]]
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn:
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- #
xtrace_disable 00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.264 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.523 nvme0n1 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.523 05:02:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:36.523 
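The `nvmet_auth_set_key` calls traced above follow one pattern per key id: emit the digest as `hmac(<digest>)`, the DH group, the host DHHC-1 key, and (when one exists) the controller key. In SPDK's test this helper writes into kernel nvmet configfs; the sketch below is a hypothetical stand-in that writes to a temporary directory so it runs anywhere, with shortened stand-in keys in place of the full DHHC-1 secrets from the log.

```shell
#!/usr/bin/env bash
# Stand-in for the nvmet_auth_set_key pattern seen in the log. CONF_DIR and
# the attribute file names are illustrative substitutes for nvmet configfs.
set -euo pipefail

CONF_DIR=$(mktemp -d)

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[$keyid]} ckey=${ckeys[$keyid]:-}

    echo "hmac(${digest})" > "$CONF_DIR/dhchap_hash"
    echo "$dhgroup"        > "$CONF_DIR/dhchap_dhgroup"
    echo "$key"            > "$CONF_DIR/dhchap_key"
    # The controller key is optional; keyid=4 in the log has none.
    if [[ -n $ckey ]]; then
        echo "$ckey" > "$CONF_DIR/dhchap_ctrl_key"
    fi
}

# Shortened stand-in keys (the log uses full DHHC-1:xx:...: secrets).
declare -A keys=( [2]="DHHC-1:01:MjFmstand-in:" [4]="DHHC-1:03:YzM3stand-in:" )
declare -A ckeys=( [2]="DHHC-1:01:NjYwstand-in:" )

nvmet_auth_set_key sha256 ffdhe4096 2
cat "$CONF_DIR/dhchap_hash"   # hmac(sha256)
```

The optional-ckey branch mirrors the log's `[[ -z '' ]]` check for key id 4, where no `--dhchap-ctrlr-key` is passed on attach.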
05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: ]] 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.523 05:02:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.523 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.524 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.524 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:36.524 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.524 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.782 nvme0n1 00:25:36.782 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.782 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.782 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.782 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.782 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.782 05:02:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.041 
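The `get_main_ns_ip` trace repeated above (nvmf/common.sh@769-783) resolves which address the host should dial: a map from transport to the name of the environment variable holding the IP (`rdma` → `NVMF_FIRST_TARGET_IP`, `tcp` → `NVMF_INITIATOR_IP`), then indirect expansion of that name. A minimal sketch, with stand-in variable values except for the 10.0.0.1 the log actually prints for the tcp path:

```shell
#!/usr/bin/env bash
# Sketch of the candidate-selection logic in get_main_ns_ip. The rdma value
# here is hypothetical; the log only exercises the tcp path.
set -euo pipefail

NVMF_INITIATOR_IP=10.0.0.1      # value echoed in the log
NVMF_FIRST_TARGET_IP=10.0.0.2   # stand-in for the rdma path
TEST_TRANSPORT=tcp

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    [[ -z $TEST_TRANSPORT ]] && return 1
    # Indirect expansion: the map stores a variable *name*, not a value.
    ip=${!ip_candidates[$TEST_TRANSPORT]}
    [[ -z $ip ]] && return 1
    echo "$ip"
}

get_main_ns_ip   # prints 10.0.0.1
```

Storing variable names rather than values lets the same helper work before the addresses are known, which is why the log shows `ip=NVMF_INITIATOR_IP` before the final `echo 10.0.0.1`.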
05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.041 05:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.300 nvme0n1 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: ]] 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.300 05:02:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:37.300 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.301 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.301 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.301 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.301 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.301 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.301 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.301 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.301 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.301 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.301 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.301 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.301 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.301 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.301 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.301 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.301 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.559 nvme0n1 00:25:37.559 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.559 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.559 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.559 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.559 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.559 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.818 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.818 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.818 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.818 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.818 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]] 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.819 05:02:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.819 05:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.078 nvme0n1 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: ]] 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.078 05:02:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.078 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.646 nvme0n1 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.646 05:02:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: ]] 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.646 05:02:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.646 05:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.905 nvme0n1 00:25:38.905 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.905 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.905 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.905 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.905 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.905 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.164 05:02:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.164 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.164 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.164 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.164 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.164 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.164 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:39.164 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.164 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.165 05:02:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.165 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.424 nvme0n1 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
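Note that the keyid=4 iteration above attaches with `--dhchap-key key4` but no `--dhchap-ctrlr-key`: `ckey` is empty, so the `${ckeys[keyid]:+...}` array expansion at host/auth.sh@58 produces nothing and bidirectional authentication is skipped for that key. A minimal, self-contained sketch of the idiom (dummy key material, hypothetical helper name):

```shell
#!/usr/bin/env bash
# Sketch of the conditional ctrlr-key argument from host/auth.sh@58.
# Key strings are dummies; build_attach_args is a hypothetical wrapper.
keys=([0]="DHHC-1:00:dummy0:" [4]="DHHC-1:03:dummy4:")
ckeys=([0]="DHHC-1:03:dummyc0:" [4]="")   # keyid 4: no controller key

build_attach_args() {
    local keyid=$1
    # Expands to two words (--dhchap-ctrlr-key ckeyN) when ckeys[keyid] is
    # non-empty, and to an empty array otherwise.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    local args=(--dhchap-key "key${keyid}" "${ckey[@]}")
    echo "${args[*]}"
}

build_attach_args 0   # bidirectional: host key + controller key
build_attach_args 4   # unidirectional: host key only
```

The same pattern explains why the keyid=0 through keyid=3 attach calls in this section all carry a matching `--dhchap-ctrlr-key ckeyN` argument.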
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: ]] 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.424 05:02:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.424 05:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.991 nvme0n1 00:25:39.991 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.991 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.991 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.991 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.991 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.991 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.250 05:02:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]] 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.250 05:02:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.250 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.251 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.251 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.251 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.251 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.251 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.251 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.251 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.251 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.251 05:02:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.818 nvme0n1 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: ]] 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:40.818 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.819 05:02:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.819 05:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.386 nvme0n1 00:25:41.386 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.386 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.386 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: ]] 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.387 05:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.955 nvme0n1 00:25:41.955 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.955 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.955 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.955 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.955 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.955 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.955 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.955 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.955 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.955 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.213 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.214 
05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.214 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.782 nvme0n1 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: ]] 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.782 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.041 nvme0n1 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.041 
05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:43.041 05:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]] 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.041 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.042 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.042 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:43.042 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.042 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.042 nvme0n1 
00:25:43.042 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.042 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.042 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.042 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.042 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:43.301 05:02:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: ]] 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.301 
05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.301 nvme0n1 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.301 05:02:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.301 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: ]] 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.561 nvme0n1 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.561 05:02:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.561 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.820 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.821 nvme0n1 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: ]] 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.821 05:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.079 nvme0n1 00:25:44.079 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.079 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.079 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.079 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.079 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.079 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.079 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:44.080 
05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]] 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.080 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.339 nvme0n1 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 
00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: ]] 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.339 05:02:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.339 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.598 nvme0n1 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.598 05:02:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: ]] 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.598 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.857 nvme0n1 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.857 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.858 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.858 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.858 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.858 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.858 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.858 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.858 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:44.858 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.858 05:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.117 nvme0n1 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.117 05:02:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: ]] 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.117 05:02:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.117 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.117 05:02:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.376 nvme0n1 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]] 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.376 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.376 
05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.377 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.377 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.377 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.377 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.377 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.377 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.377 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.377 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.377 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.377 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.377 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.377 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:45.377 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.377 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.635 nvme0n1 00:25:45.635 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.635 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.635 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.635 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.635 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: ]] 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 
00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.894 05:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.153 nvme0n1 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: ]] 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.153 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.413 nvme0n1 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.413 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.414 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.414 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.414 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.414 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.414 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:46.414 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.414 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.672 nvme0n1 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 
00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: ]] 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:46.672 05:02:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.672 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.931 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.931 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.931 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.931 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.931 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.931 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.931 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.931 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.931 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.931 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.931 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:46.931 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.931 05:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.190 nvme0n1 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:47.190 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]] 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.191 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.758 nvme0n1 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
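The trace above repeats one pattern per key: `nvmet_auth_set_key` configures the target side, `bdev_nvme_set_options` pins the DH-HMAC-CHAP digest and DH group on the host, and `bdev_nvme_attach_controller` connects with `--dhchap-key key<N>` (plus `--dhchap-ctrlr-key ckey<N>` when a controller key exists). A minimal sketch of the sweep's shape — helper name and structure are illustrative, not SPDK's actual test code:

```python
# Illustrative sketch of the digest/dhgroup/keyid sweep visible in this trace.
# auth_test_matrix is a hypothetical helper, not part of SPDK.
from itertools import product

def auth_test_matrix(digests, dhgroups, keyids):
    """Every (digest, dhgroup, keyid) combination the sweep exercises.

    For each combination, the log shows the test then running:
      1. nvmet_auth_set_key <digest> <dhgroup> <keyid>   (target side)
      2. rpc_cmd bdev_nvme_set_options --dhchap-digests <digest>
             --dhchap-dhgroups <dhgroup>                 (host side)
      3. rpc_cmd bdev_nvme_attach_controller ... --dhchap-key key<keyid>
             [--dhchap-ctrlr-key ckey<keyid>]            (connect + auth)
      4. verify via bdev_nvme_get_controllers, then detach.
    """
    return list(product(digests, dhgroups, keyids))

# This chunk covers sha384 with ffdhe4096 (keyids 1-4) and ffdhe6144 (keyids 0-2).
combos = auth_test_matrix(["sha384"], ["ffdhe4096", "ffdhe6144"], range(5))
```

The controller key is optional by design: keyid 4 in the trace attaches with `--dhchap-key key4` only, which the script handles via the `${ckeys[keyid]:+...}` parameter expansion so the `--dhchap-ctrlr-key` flag is omitted entirely when no `ckey` is defined.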
00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: ]] 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:47.758 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.759 05:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.017 nvme0n1 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: ]] 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.017 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.276 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.276 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.276 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.276 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.276 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.276 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.276 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.276 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.276 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.276 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:48.276 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.276 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.276 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:48.276 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.276 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.535 nvme0n1 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.535 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.536 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.536 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.536 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.536 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.536 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.536 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.536 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:48.536 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.536 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:49.103 nvme0n1 00:25:49.103 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.103 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.103 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.103 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.103 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.103 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.103 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.103 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.103 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.103 05:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.103 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:49.104 05:02:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: ]] 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.104 05:02:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.104 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.671 nvme0n1 00:25:49.671 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.671 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.671 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.671 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.671 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.671 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.671 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.671 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.671 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.671 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.671 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.671 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.671 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:49.672 05:02:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]] 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.672 05:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.240 nvme0n1 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.240 
05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: ]] 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.240 05:02:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.240 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.808 nvme0n1 00:25:50.808 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.808 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.808 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.808 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.808 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.808 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.067 05:02:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: ]] 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.067 05:02:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.067 05:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.635 nvme0n1 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:51.635 05:02:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.635 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.636 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.636 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.636 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.636 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.636 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.636 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.636 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:51.636 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.636 05:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.203 nvme0n1 00:25:52.203 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.203 
05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.203 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.203 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.203 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.203 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: ]] 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.204 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.463 nvme0n1 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.463 05:02:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]] 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.463 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.723 nvme0n1 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: ]] 00:25:52.723 05:02:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.723 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.983 nvme0n1 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.983 05:02:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: ]] 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.983 05:02:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.983 05:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.983 nvme0n1 00:25:52.983 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.983 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.983 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.983 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.983 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.242 05:02:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.242 nvme0n1 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.242 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.501 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.501 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.501 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.501 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.501 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.501 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:53.501 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.501 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:53.501 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.501 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.501 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:53.501 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:53.501 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:53.501 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:53.501 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.501 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: ]] 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.502 05:02:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.502 nvme0n1 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.502 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.761 05:02:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]] 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.761 nvme0n1 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:53.761 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.020 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.020 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.020 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.020 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.020 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:54.020 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.020 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.020 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:54.020 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:54.020 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:54.020 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: ]] 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:54.021 
05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.021 05:02:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.021 05:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.021 nvme0n1 00:25:54.021 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.021 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.021 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.021 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.021 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.021 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.021 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.021 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.021 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.021 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.279 05:02:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: ]] 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.279 nvme0n1 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.279 05:02:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:54.279 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:54.280 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:54.280 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.280 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.280 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.538 nvme0n1 00:25:54.538 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.539 
05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: ]] 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.539 05:02:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.539 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:54.798 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.798 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.798 nvme0n1 00:25:54.798 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.798 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.798 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.798 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.798 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.798 05:02:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:55.057 05:02:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]] 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.057 05:02:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.057 05:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.316 nvme0n1 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.316 05:02:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: ]] 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:55.316 05:02:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.316 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.317 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.317 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.317 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.317 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.317 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.317 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.317 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.317 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.317 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.317 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.317 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.317 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:55.317 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.317 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.576 nvme0n1 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: ]] 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:55.576 05:02:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.576 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.835 nvme0n1 00:25:55.835 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.835 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.835 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.835 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.835 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.835 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.835 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.835 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.835 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.835 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.835 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.835 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.835 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:55.835 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.835 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.835 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:55.835 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.836 
05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.836 05:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.095 nvme0n1 00:25:56.095 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.095 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.095 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.095 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.095 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:56.095 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:56.354 05:02:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: ]] 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.354 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.613 nvme0n1 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]] 00:25:56.613 05:02:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:56.613 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.614 05:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.182 nvme0n1 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: ]] 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.182 
05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.182 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.441 nvme0n1 00:25:57.441 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.441 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.441 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.441 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.441 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.441 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.700 05:02:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: ]] 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.700 05:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:57.959 nvme0n1 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.959 
05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.959 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.960 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.960 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.960 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.960 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.960 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.960 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.960 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.960 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.960 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.960 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.527 nvme0n1 00:25:58.527 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.527 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.527 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.527 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.527 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjgyMTk0YjQ2NGM1ODNmZmJmYzUwYTk2OTE1MWIwNjY81uHG: 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: ]] 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiZDNhMTBjOTBkNmY4Y2MzZjViNjQyMDYxMjViOTI4YWU1ODU4YjVhZGI5ZGU2ZTE3Y2FmMDhjZWMwYmZmMfR6F7Q=: 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.528 05:02:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.528 05:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.096 nvme0n1 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.096 05:02:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]] 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.096 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.097 05:02:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.097 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.097 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.097 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.097 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.097 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.097 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.756 nvme0n1 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.756 05:02:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:59.756 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: ]] 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.757 05:02:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.757 05:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.324 nvme0n1 00:26:00.324 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.324 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.324 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.324 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.324 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.324 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.324 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.325 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.325 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.325 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.325 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.325 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.325 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:00.325 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.325 05:02:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:00.583 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk3ZTJjNWI3OTZmZmU1MmQxZGQzZjI5ZjVkMWM4N2FhMzhkNjFmMzhjMGRlNmRjwx+EZA==: 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: ]] 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTE1NWYyMzY5MTU0ZmU4ZWZhZjIwM2U1ODJiYTQ0OGHj1qG4: 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.584 05:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
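For orientation, the `rpc_cmd bdev_nvme_attach_controller ... --dhchap-key key3 --dhchap-ctrlr-key ckey3` call above is issued over SPDK's JSON-RPC interface. A minimal sketch of what the request envelope looks like on the wire — field names are taken from the `request:` dumps that appear later in this log; the helper function itself is hypothetical, not part of SPDK:

```python
import json

def attach_controller_request(name, dhchap_key, dhchap_ctrlr_key=None, req_id=1):
    """Build a JSON-RPC request shaped like the bdev_nvme_attach_controller
    calls in this log (tcp transport, initiator IP 10.0.0.1, svc 4420).
    Field names mirror the request dumps in the log; this helper is a sketch."""
    params = {
        "name": name,
        "trtype": "tcp",
        "traddr": "10.0.0.1",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2024-02.io.spdk:cnode0",
        "hostnqn": "nqn.2024-02.io.spdk:host0",
        "dhchap_key": dhchap_key,
    }
    # The controller (bidirectional) key is optional: the keyid=4 iteration
    # above omits --dhchap-ctrlr-key, so the field is absent from its dump.
    if dhchap_ctrlr_key is not None:
        params["dhchap_ctrlr_key"] = dhchap_ctrlr_key
    return {
        "jsonrpc": "2.0",
        "method": "bdev_nvme_attach_controller",
        "params": params,
        "id": req_id,
    }

req = attach_controller_request("nvme0", "key3", "ckey3")
print(json.dumps(req, indent=1))
```

When the target's configured key does not match (as in the negative tests below), the response carries `"code": -5` ("Input/output error") rather than a connect success.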
00:26:01.152 nvme0n1 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzM3OGY2OGJhNDExMWNlMTVmZjhiMTdhYTBmOWUzNzcxMTZiYzYxOGM3ZWY4NmU3NmJlNTdiNzViODE3MWI3ZqoZ5Lg=: 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.152 
05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.152 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.721 nvme0n1 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]] 00:26:01.721 
05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.721 request: 00:26:01.721 { 00:26:01.721 "name": "nvme0", 00:26:01.721 "trtype": "tcp", 00:26:01.721 "traddr": "10.0.0.1", 00:26:01.721 "adrfam": "ipv4", 00:26:01.721 "trsvcid": "4420", 00:26:01.721 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:01.721 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:01.721 "prchk_reftag": false, 00:26:01.721 "prchk_guard": false, 00:26:01.721 "hdgst": false, 00:26:01.721 "ddgst": false, 00:26:01.721 "allow_unrecognized_csi": false, 00:26:01.721 "method": "bdev_nvme_attach_controller", 00:26:01.721 "req_id": 1 00:26:01.721 } 00:26:01.721 Got JSON-RPC error response 00:26:01.721 response: 00:26:01.721 { 00:26:01.721 "code": -5, 00:26:01.721 "message": "Input/output 
error" 00:26:01.721 } 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.721 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.980 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.980 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.980 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.980 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.980 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.981 request: 00:26:01.981 { 00:26:01.981 "name": "nvme0", 00:26:01.981 "trtype": "tcp", 00:26:01.981 "traddr": "10.0.0.1", 
00:26:01.981 "adrfam": "ipv4", 00:26:01.981 "trsvcid": "4420", 00:26:01.981 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:01.981 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:01.981 "prchk_reftag": false, 00:26:01.981 "prchk_guard": false, 00:26:01.981 "hdgst": false, 00:26:01.981 "ddgst": false, 00:26:01.981 "dhchap_key": "key2", 00:26:01.981 "allow_unrecognized_csi": false, 00:26:01.981 "method": "bdev_nvme_attach_controller", 00:26:01.981 "req_id": 1 00:26:01.981 } 00:26:01.981 Got JSON-RPC error response 00:26:01.981 response: 00:26:01.981 { 00:26:01.981 "code": -5, 00:26:01.981 "message": "Input/output error" 00:26:01.981 } 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.981 05:02:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:01.981 05:02:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.981 05:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.981 request: 00:26:01.981 { 00:26:01.981 "name": "nvme0", 00:26:01.981 "trtype": "tcp", 00:26:01.981 "traddr": "10.0.0.1", 00:26:01.981 "adrfam": "ipv4", 00:26:01.981 "trsvcid": "4420", 00:26:01.981 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:01.981 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:01.981 "prchk_reftag": false, 00:26:01.981 "prchk_guard": false, 00:26:01.981 "hdgst": false, 00:26:01.981 "ddgst": false, 00:26:01.981 "dhchap_key": "key1", 00:26:01.981 "dhchap_ctrlr_key": "ckey2", 00:26:01.981 "allow_unrecognized_csi": false, 00:26:01.981 "method": "bdev_nvme_attach_controller", 00:26:01.981 "req_id": 1 00:26:01.981 } 00:26:01.981 Got JSON-RPC error response 00:26:01.981 response: 00:26:01.981 { 00:26:01.981 "code": -5, 00:26:01.981 "message": "Input/output error" 00:26:01.981 } 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.981 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.240 nvme0n1 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.240 05:02:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: ]] 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.240 05:02:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.240 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.499 request: 00:26:02.499 { 00:26:02.499 "name": "nvme0", 00:26:02.499 "dhchap_key": "key1", 00:26:02.499 "dhchap_ctrlr_key": "ckey2", 00:26:02.499 "method": "bdev_nvme_set_keys", 00:26:02.499 "req_id": 1 00:26:02.499 } 00:26:02.499 Got JSON-RPC error response 00:26:02.499 response: 00:26:02.499 { 00:26:02.499 "code": -13, 00:26:02.499 "message": "Permission denied" 00:26:02.499 } 00:26:02.499 
05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:02.499 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:02.499 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:02.499 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:02.499 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:02.499 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.499 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:02.499 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.499 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.499 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.499 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:02.499 05:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:03.435 05:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.435 05:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:03.435 05:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.435 05:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.435 05:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.435 05:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:03.435 05:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:04.371 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.371 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:04.371 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.371 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.371 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ5MjQwM2RlZDcxNmQyYjYzYzMzM2Q4YWIxZjQwODA2M2JiYmUyOTQ2NmIwYmRhAH8xtA==: 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: ]] 00:26:04.630 05:02:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM3OTExYzliNmJmZGIxZTZjMTdkZTc3NzJlZWQ0YWQ2MDEzNTg4NDIwZTdmMjBky7FGng==: 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.630 nvme0n1 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.630 05:02:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFmMWMxZTQzMGJkYjUxYTE1ZWM4ODNhZWUyMTc5NDARHNu/: 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: ]] 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYwMGE4Nzc1YmYwNGZmMGExNmNiMzgxMWY3ODc5NjGDucKn: 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:04.630 
05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.630 request: 00:26:04.630 { 00:26:04.630 "name": "nvme0", 00:26:04.630 "dhchap_key": "key2", 00:26:04.630 "dhchap_ctrlr_key": "ckey1", 00:26:04.630 "method": "bdev_nvme_set_keys", 00:26:04.630 "req_id": 1 00:26:04.630 } 00:26:04.630 Got JSON-RPC error response 00:26:04.630 response: 00:26:04.630 { 00:26:04.630 "code": -13, 00:26:04.630 "message": "Permission denied" 00:26:04.630 } 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:04.630 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.889 05:02:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.889 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.889 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:04.889 05:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:05.825 rmmod nvme_tcp 00:26:05.825 rmmod nvme_fabrics 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 755454 ']' 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 755454 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 755454 ']' 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 755454 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 755454 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:05.825 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:05.826 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 755454' 00:26:05.826 killing process with pid 755454 00:26:05.826 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 755454 00:26:05.826 05:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 755454 00:26:06.085 05:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:06.085 05:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:06.085 05:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:06.085 05:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
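The `iptr` step above tears down SPDK-tagged firewall rules by piping `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`. A sketch of that filtering stage run against a text fixture (touching the real iptables ruleset requires root, so the rule strings here are made up for illustration):

```shell
#!/bin/bash
# Fixture standing in for `iptables-save` output; the SPDK_NVMF comment
# marks rules the teardown is supposed to drop.
rules='-A INPUT -j ACCEPT
-A INPUT -m comment --comment SPDK_NVMF -j DROP'

# Same filter as the log's pipeline: keep everything not tagged SPDK_NVMF.
filtered=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)
printf '%s\n' "$filtered"
```

In the real teardown the filtered output is fed to `iptables-restore`, replacing the ruleset without the SPDK-tagged entries.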
00:26:06.085 05:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:06.085 05:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:06.085 05:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:06.085 05:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:06.085 05:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:06.085 05:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.085 05:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:06.085 05:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.622 05:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:08.622 05:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:08.622 05:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:08.622 05:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:08.622 05:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:08.622 05:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:08.622 05:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:08.622 05:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:08.622 05:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:08.622 05:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:08.622 05:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:08.622 05:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:08.622 05:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:11.157 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:11.157 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:11.157 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:11.157 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:11.157 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:11.157 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:11.157 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:11.157 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:11.157 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:11.157 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:11.157 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:11.157 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:11.157 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:11.157 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:11.157 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:11.157 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:12.095 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:12.095 05:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.uj4 /tmp/spdk.key-null.OkL /tmp/spdk.key-sha256.hkY /tmp/spdk.key-sha384.9sh /tmp/spdk.key-sha512.13s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:12.095 05:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:14.631 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:14.631 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:14.631 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:14.631 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:14.631 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:14.631 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:14.631 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:14.631 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:14.631 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:14.631 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:14.631 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:14.631 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:14.631 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:14.631 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:14.631 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:14.631 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:14.631 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:14.890 00:26:14.890 real 0m53.689s 00:26:14.890 user 0m48.421s 00:26:14.890 sys 0m12.631s 00:26:14.890 05:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:14.890 05:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.890 ************************************ 00:26:14.890 END TEST nvmf_auth_host 00:26:14.890 ************************************ 00:26:14.890 05:03:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:26:14.890 05:03:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:14.890 05:03:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:14.890 05:03:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:14.890 05:03:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.890 ************************************ 00:26:14.890 START TEST nvmf_digest 00:26:14.890 ************************************ 00:26:14.890 05:03:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:15.150 * Looking for test storage... 00:26:15.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:15.150 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:15.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.151 --rc genhtml_branch_coverage=1 00:26:15.151 --rc genhtml_function_coverage=1 00:26:15.151 --rc genhtml_legend=1 00:26:15.151 --rc geninfo_all_blocks=1 00:26:15.151 --rc geninfo_unexecuted_blocks=1 00:26:15.151 00:26:15.151 ' 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:15.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.151 --rc genhtml_branch_coverage=1 00:26:15.151 --rc genhtml_function_coverage=1 00:26:15.151 --rc genhtml_legend=1 00:26:15.151 --rc geninfo_all_blocks=1 00:26:15.151 --rc geninfo_unexecuted_blocks=1 00:26:15.151 00:26:15.151 ' 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:15.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.151 --rc genhtml_branch_coverage=1 00:26:15.151 --rc genhtml_function_coverage=1 00:26:15.151 --rc genhtml_legend=1 00:26:15.151 --rc geninfo_all_blocks=1 00:26:15.151 --rc geninfo_unexecuted_blocks=1 00:26:15.151 00:26:15.151 ' 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:15.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.151 --rc genhtml_branch_coverage=1 00:26:15.151 --rc genhtml_function_coverage=1 00:26:15.151 --rc genhtml_legend=1 00:26:15.151 --rc geninfo_all_blocks=1 00:26:15.151 --rc geninfo_unexecuted_blocks=1 00:26:15.151 00:26:15.151 ' 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:15.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:15.151 05:03:06 
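Note the recorded script error above: `'[' '' -eq 1 ']'` in nvmf/common.sh line 33 produces "[: : integer expression expected", because `-eq` requires an integer operand and the variable expanded to an empty string. A defensive sketch of the pattern that avoids this (the flag name below is hypothetical, not the actual variable in common.sh):

```shell
#!/bin/bash
# Hypothetical flag, empty as in the failing test in the log.
SPDK_TEST_EXAMPLE_FLAG=''

# Default the expansion to 0 so the numeric comparison always sees an integer.
if [ "${SPDK_TEST_EXAMPLE_FLAG:-0}" -eq 1 ]; then
  state=enabled
else
  state=disabled
fi
echo "flag $state"
```

The `:-0` form substitutes the default for both unset and empty values, which is exactly the case that trips the bare `-eq` test in the log.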
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:15.151 05:03:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:21.721 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:21.721 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:21.721 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:21.721 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:21.721 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:21.721 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:21.721 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:21.721 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:21.721 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:21.721 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:21.721 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:21.721 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:21.721 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:21.721 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:21.721 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:21.721 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:21.721 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:21.722 05:03:11 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:21.722 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:21.722 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:21.722 Found net devices under 0000:af:00.0: cvl_0_0 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:21.722 Found net devices under 0000:af:00.1: cvl_0_1 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:21.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:21.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:26:21.722 00:26:21.722 --- 10.0.0.2 ping statistics --- 00:26:21.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.722 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:21.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:21.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:26:21.722 00:26:21.722 --- 10.0.0.1 ping statistics --- 00:26:21.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.722 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:21.722 05:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:21.722 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:21.722 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:21.722 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:21.722 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:21.722 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:21.722 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:21.722 ************************************ 00:26:21.722 START TEST nvmf_digest_clean 00:26:21.722 ************************************ 00:26:21.722 
05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:21.722 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:21.722 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:21.722 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:21.722 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:21.722 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:21.722 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:21.722 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:21.722 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:21.722 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=769681 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 769681 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 769681 ']' 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:21.723 05:03:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:21.723 [2024-12-10 05:03:12.114062] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:26:21.723 [2024-12-10 05:03:12.114109] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.723 [2024-12-10 05:03:12.177394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.723 [2024-12-10 05:03:12.218579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.723 [2024-12-10 05:03:12.218612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:21.723 [2024-12-10 05:03:12.218619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.723 [2024-12-10 05:03:12.218625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.723 [2024-12-10 05:03:12.218630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:21.723 [2024-12-10 05:03:12.219096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:21.723 null0 00:26:21.723 [2024-12-10 05:03:12.398073] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.723 [2024-12-10 05:03:12.422264] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=769708 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 769708 /var/tmp/bperf.sock 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 769708 ']' 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:21.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:21.723 [2024-12-10 05:03:12.473161] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:26:21.723 [2024-12-10 05:03:12.473206] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769708 ] 00:26:21.723 [2024-12-10 05:03:12.545455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.723 [2024-12-10 05:03:12.585411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:21.723 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:21.982 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.982 05:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.244 nvme0n1 00:26:22.244 05:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:22.244 05:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:22.244 Running I/O for 2 seconds... 00:26:24.558 24604.00 IOPS, 96.11 MiB/s [2024-12-10T04:03:15.695Z] 25469.00 IOPS, 99.49 MiB/s 00:26:24.558 Latency(us) 00:26:24.558 [2024-12-10T04:03:15.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.558 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:24.558 nvme0n1 : 2.00 25485.72 99.55 0.00 0.00 5017.65 2293.76 11609.23 00:26:24.558 [2024-12-10T04:03:15.695Z] =================================================================================================================== 00:26:24.558 [2024-12-10T04:03:15.695Z] Total : 25485.72 99.55 0.00 0.00 5017.65 2293.76 11609.23 00:26:24.558 { 00:26:24.558 "results": [ 00:26:24.558 { 00:26:24.558 "job": "nvme0n1", 00:26:24.558 "core_mask": "0x2", 00:26:24.558 "workload": "randread", 00:26:24.558 "status": "finished", 00:26:24.558 "queue_depth": 128, 00:26:24.558 "io_size": 4096, 00:26:24.558 "runtime": 2.003867, 00:26:24.558 "iops": 25485.723353895242, 00:26:24.558 "mibps": 99.55360685115329, 00:26:24.558 "io_failed": 0, 00:26:24.558 "io_timeout": 0, 00:26:24.558 "avg_latency_us": 5017.653965593443, 00:26:24.558 "min_latency_us": 2293.76, 00:26:24.558 "max_latency_us": 11609.234285714285 00:26:24.558 } 00:26:24.558 ], 00:26:24.558 "core_count": 1 00:26:24.558 } 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 
00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:24.558 | select(.opcode=="crc32c") 00:26:24.558 | "\(.module_name) \(.executed)"' 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 769708 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 769708 ']' 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 769708 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 769708 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 769708' 00:26:24.558 killing process with pid 769708 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 769708 00:26:24.558 Received shutdown signal, test time was about 2.000000 seconds 00:26:24.558 00:26:24.558 Latency(us) 00:26:24.558 [2024-12-10T04:03:15.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.558 [2024-12-10T04:03:15.695Z] =================================================================================================================== 00:26:24.558 [2024-12-10T04:03:15.695Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:24.558 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 769708 00:26:24.817 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:24.817 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:24.817 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:24.817 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:24.817 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:24.817 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:24.817 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:24.817 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=770167 00:26:24.817 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 770167 /var/tmp/bperf.sock 
00:26:24.817 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:24.817 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 770167 ']' 00:26:24.817 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:24.817 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.817 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:24.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:24.817 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.817 05:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:24.817 [2024-12-10 05:03:15.861764] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:26:24.817 [2024-12-10 05:03:15.861810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770167 ] 00:26:24.817 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:24.817 Zero copy mechanism will not be used. 
00:26:24.817 [2024-12-10 05:03:15.937158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.076 [2024-12-10 05:03:15.974526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.076 05:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.076 05:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:25.076 05:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:25.076 05:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:25.076 05:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:25.336 05:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.336 05:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.594 nvme0n1 00:26:25.594 05:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:25.594 05:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:25.853 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:25.853 Zero copy mechanism will not be used. 00:26:25.853 Running I/O for 2 seconds... 
00:26:27.727 6095.00 IOPS, 761.88 MiB/s [2024-12-10T04:03:18.864Z] 6074.50 IOPS, 759.31 MiB/s
00:26:27.727 Latency(us)
00:26:27.727 [2024-12-10T04:03:18.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:27.727 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:27.727 nvme0n1 : 2.00 6077.60 759.70 0.00 0.00 2629.91 702.17 5804.62
00:26:27.727 [2024-12-10T04:03:18.864Z] ===================================================================================================================
00:26:27.727 [2024-12-10T04:03:18.864Z] Total : 6077.60 759.70 0.00 0.00 2629.91 702.17 5804.62
00:26:27.727 {
00:26:27.727 "results": [
00:26:27.727 {
00:26:27.727 "job": "nvme0n1",
00:26:27.727 "core_mask": "0x2",
00:26:27.727 "workload": "randread",
00:26:27.727 "status": "finished",
00:26:27.727 "queue_depth": 16,
00:26:27.727 "io_size": 131072,
00:26:27.727 "runtime": 2.001614,
00:26:27.727 "iops": 6077.595380527914,
00:26:27.727 "mibps": 759.6994225659893,
00:26:27.727 "io_failed": 0,
00:26:27.727 "io_timeout": 0,
00:26:27.727 "avg_latency_us": 2629.912426046621,
00:26:27.727 "min_latency_us": 702.1714285714286,
00:26:27.727 "max_latency_us": 5804.617142857142
00:26:27.727 }
00:26:27.727 ],
00:26:27.727 "core_count": 1
00:26:27.727 }
00:26:27.727 05:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:27.727 05:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:27.727 05:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:27.727 05:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:27.727 | select(.opcode=="crc32c")
00:26:27.727 | "\(.module_name) \(.executed)"'
00:26:27.727 05:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:27.986 05:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:27.986 05:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:27.986 05:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:27.986 05:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:27.986 05:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 770167 00:26:27.986 05:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 770167 ']' 00:26:27.986 05:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 770167 00:26:27.986 05:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:27.986 05:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:27.986 05:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 770167 00:26:27.986 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:27.986 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:27.986 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 770167' 00:26:27.986 killing process with pid 770167 00:26:27.986 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 770167 00:26:27.986 Received shutdown signal, test time was about 2.000000 seconds 00:26:27.986 
00:26:27.986 Latency(us)
00:26:27.986 [2024-12-10T04:03:19.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:27.986 [2024-12-10T04:03:19.123Z] ===================================================================================================================
00:26:27.986 [2024-12-10T04:03:19.123Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:27.986 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 770167
00:26:28.245 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:26:28.245 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:26:28.245 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:26:28.245 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:26:28.245 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:26:28.245 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:26:28.245 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:26:28.245 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=770840
00:26:28.245 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 770840 /var/tmp/bperf.sock
00:26:28.245 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:26:28.245 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 770840 ']'
00:26:28.245 05:03:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:28.245 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.245 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:28.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:28.245 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.245 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:28.245 [2024-12-10 05:03:19.242457] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:26:28.245 [2024-12-10 05:03:19.242504] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770840 ] 00:26:28.245 [2024-12-10 05:03:19.317623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.245 [2024-12-10 05:03:19.357891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.505 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.505 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:28.505 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:28.505 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:28.505 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:28.764 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:28.764 05:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.022 nvme0n1 00:26:29.022 05:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:29.022 05:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:29.280 Running I/O for 2 seconds... 
00:26:31.153 28393.00 IOPS, 110.91 MiB/s [2024-12-10T04:03:22.290Z] 28622.00 IOPS, 111.80 MiB/s
00:26:31.153 Latency(us)
00:26:31.153 [2024-12-10T04:03:22.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:31.153 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:31.153 nvme0n1 : 2.00 28641.26 111.88 0.00 0.00 4464.30 1747.63 9424.70
00:26:31.153 [2024-12-10T04:03:22.290Z] ===================================================================================================================
00:26:31.153 [2024-12-10T04:03:22.290Z] Total : 28641.26 111.88 0.00 0.00 4464.30 1747.63 9424.70
00:26:31.153 {
00:26:31.153 "results": [
00:26:31.153 {
00:26:31.153 "job": "nvme0n1",
00:26:31.153 "core_mask": "0x2",
00:26:31.153 "workload": "randwrite",
00:26:31.153 "status": "finished",
00:26:31.153 "queue_depth": 128,
00:26:31.153 "io_size": 4096,
00:26:31.153 "runtime": 2.003124,
00:26:31.153 "iops": 28641.262348212094,
00:26:31.153 "mibps": 111.8799310477035,
00:26:31.153 "io_failed": 0,
00:26:31.153 "io_timeout": 0,
00:26:31.153 "avg_latency_us": 4464.299350006474,
00:26:31.153 "min_latency_us": 1747.6266666666668,
00:26:31.153 "max_latency_us": 9424.700952380952
00:26:31.153 }
00:26:31.153 ],
00:26:31.153 "core_count": 1
00:26:31.153 }
00:26:31.153 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:31.153 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:31.153 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:31.153 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:31.153 | select(.opcode=="crc32c")
00:26:31.153 | "\(.module_name) \(.executed)"'
00:26:31.153 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:31.412 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:31.412 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:31.412 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:31.412 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:31.412 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 770840 00:26:31.412 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 770840 ']' 00:26:31.412 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 770840 00:26:31.412 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:31.412 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:31.412 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 770840 00:26:31.412 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:31.412 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:31.412 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 770840' 00:26:31.412 killing process with pid 770840 00:26:31.412 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 770840 00:26:31.412 Received shutdown signal, test time was about 2.000000 seconds 00:26:31.412 
00:26:31.412 Latency(us)
00:26:31.412 [2024-12-10T04:03:22.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:31.412 [2024-12-10T04:03:22.549Z] ===================================================================================================================
00:26:31.412 [2024-12-10T04:03:22.549Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:31.412 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 770840
00:26:31.671 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:26:31.671 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:26:31.671 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:26:31.671 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:26:31.671 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:26:31.671 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:26:31.671 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:26:31.671 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=771300
00:26:31.671 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 771300 /var/tmp/bperf.sock
00:26:31.671 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:26:31.671 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 771300 ']'
00:26:31.671 05:03:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:31.671 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.671 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:31.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:31.671 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.671 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:31.671 [2024-12-10 05:03:22.691254] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:26:31.672 [2024-12-10 05:03:22.691304] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771300 ] 00:26:31.672 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:31.672 Zero copy mechanism will not be used. 
00:26:31.672 [2024-12-10 05:03:22.765420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.672 [2024-12-10 05:03:22.801651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.931 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.931 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:31.931 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:31.931 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:31.931 05:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:32.189 05:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.190 05:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.448 nvme0n1 00:26:32.448 05:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:32.448 05:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:32.707 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:32.707 Zero copy mechanism will not be used. 00:26:32.707 Running I/O for 2 seconds... 
00:26:34.579 6367.00 IOPS, 795.88 MiB/s [2024-12-10T04:03:25.716Z] 6614.00 IOPS, 826.75 MiB/s
00:26:34.579 Latency(us)
00:26:34.579 [2024-12-10T04:03:25.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:34.579 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:34.579 nvme0n1 : 2.00 6611.51 826.44 0.00 0.00 2415.93 1872.46 13044.78
00:26:34.579 [2024-12-10T04:03:25.716Z] ===================================================================================================================
00:26:34.579 [2024-12-10T04:03:25.716Z] Total : 6611.51 826.44 0.00 0.00 2415.93 1872.46 13044.78
00:26:34.579 {
00:26:34.579 "results": [
00:26:34.579 {
00:26:34.579 "job": "nvme0n1",
00:26:34.579 "core_mask": "0x2",
00:26:34.579 "workload": "randwrite",
00:26:34.579 "status": "finished",
00:26:34.579 "queue_depth": 16,
00:26:34.579 "io_size": 131072,
00:26:34.579 "runtime": 2.003779,
00:26:34.579 "iops": 6611.507556472046,
00:26:34.579 "mibps": 826.4384445590058,
00:26:34.579 "io_failed": 0,
00:26:34.579 "io_timeout": 0,
00:26:34.579 "avg_latency_us": 2415.933894639981,
00:26:34.579 "min_latency_us": 1872.4571428571428,
00:26:34.579 "max_latency_us": 13044.784761904762
00:26:34.579 }
00:26:34.579 ],
00:26:34.579 "core_count": 1
00:26:34.579 }
00:26:34.579 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:34.579 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:34.579 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:34.579 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:34.579 | select(.opcode=="crc32c")
00:26:34.579 | "\(.module_name) \(.executed)"'
00:26:34.579 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:34.837 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:34.837 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:34.837 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:34.837 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:34.837 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 771300 00:26:34.837 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 771300 ']' 00:26:34.837 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 771300 00:26:34.837 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:34.837 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.837 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 771300 00:26:34.837 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:34.837 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:34.837 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 771300' 00:26:34.837 killing process with pid 771300 00:26:34.837 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 771300 00:26:34.837 Received shutdown signal, test time was about 2.000000 seconds 00:26:34.837 
00:26:34.837 Latency(us)
00:26:34.837 [2024-12-10T04:03:25.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:34.837 [2024-12-10T04:03:25.974Z] ===================================================================================================================
00:26:34.837 [2024-12-10T04:03:25.974Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:34.837 05:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 771300
00:26:35.096 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 769681
00:26:35.096 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 769681 ']'
00:26:35.096 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 769681
00:26:35.096 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:26:35.096 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:35.096 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 769681
00:26:35.096 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:35.096 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:35.096 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 769681'
00:26:35.096 killing process with pid 769681
00:26:35.096 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 769681
00:26:35.096 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 769681
00:26:35.356
00:26:35.356 real 0m14.252s
00:26:35.356 user 0m27.410s 00:26:35.356 sys 0m4.545s 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:35.356 ************************************ 00:26:35.356 END TEST nvmf_digest_clean 00:26:35.356 ************************************ 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:35.356 ************************************ 00:26:35.356 START TEST nvmf_digest_error 00:26:35.356 ************************************ 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=772001 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 772001 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 772001 ']' 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:35.356 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.356 [2024-12-10 05:03:26.435286] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:26:35.356 [2024-12-10 05:03:26.435328] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.615 [2024-12-10 05:03:26.512303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.615 [2024-12-10 05:03:26.551191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.615 [2024-12-10 05:03:26.551226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:35.615 [2024-12-10 05:03:26.551233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.615 [2024-12-10 05:03:26.551239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.615 [2024-12-10 05:03:26.551245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:35.615 [2024-12-10 05:03:26.551705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.615 [2024-12-10 05:03:26.620145] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.615 05:03:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.615 null0 00:26:35.615 [2024-12-10 05:03:26.715805] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.615 [2024-12-10 05:03:26.739990] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:35.615 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=772022 00:26:35.875 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 772022 /var/tmp/bperf.sock 00:26:35.875 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:35.875 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 772022 ']' 
00:26:35.875 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:35.875 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:35.875 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:35.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:35.875 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:35.875 05:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.875 [2024-12-10 05:03:26.792276] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:26:35.875 [2024-12-10 05:03:26.792317] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772022 ] 00:26:35.875 [2024-12-10 05:03:26.865893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.875 [2024-12-10 05:03:26.904699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.875 05:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.875 05:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:35.875 05:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:35.875 05:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:36.134 05:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:36.134 05:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.134 05:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:36.134 05:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.134 05:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.134 05:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.393 nvme0n1 00:26:36.393 05:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:36.393 05:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.393 05:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:36.393 05:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.393 05:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:36.393 05:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:36.653 Running I/O for 2 seconds... 00:26:36.653 [2024-12-10 05:03:27.621673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.621706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.621717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.653 [2024-12-10 05:03:27.633235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.633261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.633269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.653 [2024-12-10 05:03:27.642217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.642239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.642247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.653 [2024-12-10 05:03:27.651814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.651836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19073 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.651844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.653 [2024-12-10 05:03:27.660066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.660087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.660096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.653 [2024-12-10 05:03:27.670295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.670323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.670332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.653 [2024-12-10 05:03:27.680752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.680774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.680782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.653 [2024-12-10 05:03:27.690690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.690710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.690719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.653 [2024-12-10 05:03:27.700302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.700323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.700332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.653 [2024-12-10 05:03:27.709706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.709727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.709735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.653 [2024-12-10 05:03:27.719579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.719600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.719608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.653 [2024-12-10 05:03:27.728205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.728226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.728234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.653 [2024-12-10 05:03:27.737123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.737144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.737152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.653 [2024-12-10 05:03:27.747304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.747326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.747334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.653 [2024-12-10 05:03:27.755628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.755648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.755660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.653 [2024-12-10 05:03:27.765721] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.765741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.765749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.653 [2024-12-10 05:03:27.774098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.774119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.774126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.653 [2024-12-10 05:03:27.783569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.653 [2024-12-10 05:03:27.783590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.653 [2024-12-10 05:03:27.783598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.913 [2024-12-10 05:03:27.794514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.913 [2024-12-10 05:03:27.794534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.913 [2024-12-10 05:03:27.794542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:36.913 [2024-12-10 05:03:27.805158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.913 [2024-12-10 05:03:27.805183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.913 [2024-12-10 05:03:27.805191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.913 [2024-12-10 05:03:27.813444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.913 [2024-12-10 05:03:27.813464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.913 [2024-12-10 05:03:27.813472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.913 [2024-12-10 05:03:27.825141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.913 [2024-12-10 05:03:27.825162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.913 [2024-12-10 05:03:27.825175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.913 [2024-12-10 05:03:27.835036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.913 [2024-12-10 05:03:27.835057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.913 [2024-12-10 05:03:27.835065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.913 [2024-12-10 05:03:27.843704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.913 [2024-12-10 05:03:27.843724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.913 [2024-12-10 05:03:27.843732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.913 [2024-12-10 05:03:27.853417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.913 [2024-12-10 05:03:27.853438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.913 [2024-12-10 05:03:27.853446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.913 [2024-12-10 05:03:27.862390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.913 [2024-12-10 05:03:27.862410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.913 [2024-12-10 05:03:27.862419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.913 [2024-12-10 05:03:27.873154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.913 [2024-12-10 05:03:27.873179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.913 [2024-12-10 
05:03:27.873188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.913 [2024-12-10 05:03:27.886247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.913 [2024-12-10 05:03:27.886268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.913 [2024-12-10 05:03:27.886276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.913 [2024-12-10 05:03:27.894191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.913 [2024-12-10 05:03:27.894211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.913 [2024-12-10 05:03:27.894219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.913 [2024-12-10 05:03:27.905280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.913 [2024-12-10 05:03:27.905301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.913 [2024-12-10 05:03:27.905309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.913 [2024-12-10 05:03:27.917432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.913 [2024-12-10 05:03:27.917453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18501 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.913 [2024-12-10 05:03:27.917461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.913 [2024-12-10 05:03:27.929714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.913 [2024-12-10 05:03:27.929734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.913 [2024-12-10 05:03:27.929746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.913 [2024-12-10 05:03:27.940832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.913 [2024-12-10 05:03:27.940852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.913 [2024-12-10 05:03:27.940860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.913 [2024-12-10 05:03:27.949663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.913 [2024-12-10 05:03:27.949683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.913 [2024-12-10 05:03:27.949691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.913 [2024-12-10 05:03:27.961222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.914 [2024-12-10 05:03:27.961243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.914 [2024-12-10 05:03:27.961251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.914 [2024-12-10 05:03:27.969695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.914 [2024-12-10 05:03:27.969716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.914 [2024-12-10 05:03:27.969724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.914 [2024-12-10 05:03:27.981952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.914 [2024-12-10 05:03:27.981973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.914 [2024-12-10 05:03:27.981982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.914 [2024-12-10 05:03:27.994627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.914 [2024-12-10 05:03:27.994648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.914 [2024-12-10 05:03:27.994656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.914 [2024-12-10 05:03:28.003113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x212d590) 00:26:36.914 [2024-12-10 05:03:28.003134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.914 [2024-12-10 05:03:28.003142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.914 [2024-12-10 05:03:28.014424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.914 [2024-12-10 05:03:28.014445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.914 [2024-12-10 05:03:28.014453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.914 [2024-12-10 05:03:28.022716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.914 [2024-12-10 05:03:28.022743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.914 [2024-12-10 05:03:28.022751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.914 [2024-12-10 05:03:28.034444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.914 [2024-12-10 05:03:28.034464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.914 [2024-12-10 05:03:28.034472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.914 [2024-12-10 05:03:28.044508] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:36.914 [2024-12-10 05:03:28.044529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.914 [2024-12-10 05:03:28.044538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.055101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.055121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.055129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.063547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.063566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.063574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.075447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.075468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.075476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.087492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.087512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.087520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.099344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.099366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.099374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.106823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.106844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.106853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.117423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.117444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.117452] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.127275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.127296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.127304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.137514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.137535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.137543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.145617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.145639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.145647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.158342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.158363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 
05:03:28.158371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.168605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.168625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.168634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.176724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.176746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.176754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.187877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.187898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.187906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.197003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.197024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24889 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.197036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.205376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.205397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.205404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.214549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.214569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.214577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.225441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.225461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.225469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.236997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.237017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.237025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.245711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.245731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.245739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.257900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.257921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.257929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.270703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.270724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.270732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.281788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.281807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.281815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.174 [2024-12-10 05:03:28.293283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.174 [2024-12-10 05:03:28.293303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.174 [2024-12-10 05:03:28.293311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.434 [2024-12-10 05:03:28.307082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.434 [2024-12-10 05:03:28.307104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.434 [2024-12-10 05:03:28.307112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.434 [2024-12-10 05:03:28.315247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.434 [2024-12-10 05:03:28.315267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.434 [2024-12-10 05:03:28.315275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.434 [2024-12-10 05:03:28.326975] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.434 [2024-12-10 05:03:28.326995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.434 [2024-12-10 05:03:28.327003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.434 [2024-12-10 05:03:28.337963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.434 [2024-12-10 05:03:28.337983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.434 [2024-12-10 05:03:28.337992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.434 [2024-12-10 05:03:28.351423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.434 [2024-12-10 05:03:28.351444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.434 [2024-12-10 05:03:28.351452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.434 [2024-12-10 05:03:28.363222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.434 [2024-12-10 05:03:28.363243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.434 [2024-12-10 05:03:28.363251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:37.434 [2024-12-10 05:03:28.371482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.434 [2024-12-10 05:03:28.371503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.434 [2024-12-10 05:03:28.371511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.434 [2024-12-10 05:03:28.381757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.434 [2024-12-10 05:03:28.381778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.434 [2024-12-10 05:03:28.381790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.434 [2024-12-10 05:03:28.393048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.434 [2024-12-10 05:03:28.393069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 05:03:28.393078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.435 [2024-12-10 05:03:28.402601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.435 [2024-12-10 05:03:28.402623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 05:03:28.402631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.435 [2024-12-10 05:03:28.411899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.435 [2024-12-10 05:03:28.411921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 05:03:28.411930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.435 [2024-12-10 05:03:28.420198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.435 [2024-12-10 05:03:28.420220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 05:03:28.420228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.435 [2024-12-10 05:03:28.430057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.435 [2024-12-10 05:03:28.430079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 05:03:28.430087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.435 [2024-12-10 05:03:28.438930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.435 [2024-12-10 05:03:28.438951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 
05:03:28.438959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.435 [2024-12-10 05:03:28.450283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.435 [2024-12-10 05:03:28.450302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 05:03:28.450311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.435 [2024-12-10 05:03:28.460086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.435 [2024-12-10 05:03:28.460106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 05:03:28.460114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.435 [2024-12-10 05:03:28.468810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.435 [2024-12-10 05:03:28.468835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 05:03:28.468842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.435 [2024-12-10 05:03:28.481424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.435 [2024-12-10 05:03:28.481444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3645 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 05:03:28.481452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.435 [2024-12-10 05:03:28.493485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.435 [2024-12-10 05:03:28.493505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 05:03:28.493514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.435 [2024-12-10 05:03:28.503208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.435 [2024-12-10 05:03:28.503228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 05:03:28.503237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.435 [2024-12-10 05:03:28.511538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.435 [2024-12-10 05:03:28.511559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 05:03:28.511567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.435 [2024-12-10 05:03:28.521294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.435 [2024-12-10 05:03:28.521315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 05:03:28.521323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.435 [2024-12-10 05:03:28.530777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.435 [2024-12-10 05:03:28.530798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 05:03:28.530805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.435 [2024-12-10 05:03:28.539274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.435 [2024-12-10 05:03:28.539296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 05:03:28.539304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.435 [2024-12-10 05:03:28.549633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.435 [2024-12-10 05:03:28.549653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 05:03:28.549662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.435 [2024-12-10 05:03:28.558947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x212d590) 00:26:37.435 [2024-12-10 05:03:28.558970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.435 [2024-12-10 05:03:28.558978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.695 [2024-12-10 05:03:28.567839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.695 [2024-12-10 05:03:28.567861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.695 [2024-12-10 05:03:28.567869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.695 [2024-12-10 05:03:28.577789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.695 [2024-12-10 05:03:28.577810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.695 [2024-12-10 05:03:28.577818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.695 [2024-12-10 05:03:28.587276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.695 [2024-12-10 05:03:28.587297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.695 [2024-12-10 05:03:28.587306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.695 [2024-12-10 05:03:28.596960] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.695 [2024-12-10 05:03:28.596981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.695 [2024-12-10 05:03:28.596990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.695 24965.00 IOPS, 97.52 MiB/s [2024-12-10T04:03:28.832Z] [2024-12-10 05:03:28.606620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.695 [2024-12-10 05:03:28.606645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.695 [2024-12-10 05:03:28.606653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.695 [2024-12-10 05:03:28.616660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.695 [2024-12-10 05:03:28.616681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.695 [2024-12-10 05:03:28.616692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.695 [2024-12-10 05:03:28.626582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.695 [2024-12-10 05:03:28.626603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.695 [2024-12-10 05:03:28.626611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.695 [2024-12-10 05:03:28.634649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.695 [2024-12-10 05:03:28.634670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.695 [2024-12-10 05:03:28.634682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.695 [2024-12-10 05:03:28.645674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.695 [2024-12-10 05:03:28.645696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.695 [2024-12-10 05:03:28.645704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.695 [2024-12-10 05:03:28.657678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.695 [2024-12-10 05:03:28.657700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.695 [2024-12-10 05:03:28.657708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.695 [2024-12-10 05:03:28.669824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.695 [2024-12-10 05:03:28.669846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.695 [2024-12-10 05:03:28.669854] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.695 [2024-12-10 05:03:28.680573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.695 [2024-12-10 05:03:28.680594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.695 [2024-12-10 05:03:28.680602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.695 [2024-12-10 05:03:28.688908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.695 [2024-12-10 05:03:28.688930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.695 [2024-12-10 05:03:28.688938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.695 [2024-12-10 05:03:28.698108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.695 [2024-12-10 05:03:28.698130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.695 [2024-12-10 05:03:28.698138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.695 [2024-12-10 05:03:28.707095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.696 [2024-12-10 05:03:28.707117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11828 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:37.696 [2024-12-10 05:03:28.707124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.696 [2024-12-10 05:03:28.716976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.696 [2024-12-10 05:03:28.716997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.696 [2024-12-10 05:03:28.717005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.696 [2024-12-10 05:03:28.725280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.696 [2024-12-10 05:03:28.725301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.696 [2024-12-10 05:03:28.725310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.696 [2024-12-10 05:03:28.735184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.696 [2024-12-10 05:03:28.735205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.696 [2024-12-10 05:03:28.735213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.696 [2024-12-10 05:03:28.744440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.696 [2024-12-10 05:03:28.744461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:34 nsid:1 lba:9670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.696 [2024-12-10 05:03:28.744469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.696 [2024-12-10 05:03:28.754582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.696 [2024-12-10 05:03:28.754604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.696 [2024-12-10 05:03:28.754612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.696 [2024-12-10 05:03:28.762612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.696 [2024-12-10 05:03:28.762634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.696 [2024-12-10 05:03:28.762642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.696 [2024-12-10 05:03:28.772657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.696 [2024-12-10 05:03:28.772679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.696 [2024-12-10 05:03:28.772688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.696 [2024-12-10 05:03:28.784854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.696 [2024-12-10 
05:03:28.784875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.696 [2024-12-10 05:03:28.784883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.696 [2024-12-10 05:03:28.797343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.696 [2024-12-10 05:03:28.797364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.696 [2024-12-10 05:03:28.797372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.696 [2024-12-10 05:03:28.808445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.696 [2024-12-10 05:03:28.808466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.696 [2024-12-10 05:03:28.808478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.696 [2024-12-10 05:03:28.820260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590) 00:26:37.696 [2024-12-10 05:03:28.820282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.696 [2024-12-10 05:03:28.820290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.956 [2024-12-10 05:03:28.831397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.831419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.831428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:28.839552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.839572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.839581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:28.851878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.851899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.851907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:28.864285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.864307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.864315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:28.876644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.876665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.876672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:28.887783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.887805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.887813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:28.895428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.895450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.895458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:28.905370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.905395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.905403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:28.915692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.915713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.915722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:28.924751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.924771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.924779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:28.933534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.933555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.933563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:28.942101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.942123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.942131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:28.951330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.951351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.951359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:28.961342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.961364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.961372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:28.970124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.970145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.970153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:28.979950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.979971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.979979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:28.989056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.989077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.989085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:28.998765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:28.998785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:28.998793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:29.007200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:29.007221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:29.007229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:29.016708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:29.016728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:29.016736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:29.026246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:29.026269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:29.026277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:29.036735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:29.036756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:29.036763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:29.048997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:29.049019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:29.049027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.956 [2024-12-10 05:03:29.057501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.956 [2024-12-10 05:03:29.057522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.956 [2024-12-10 05:03:29.057530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.957 [2024-12-10 05:03:29.070238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.957 [2024-12-10 05:03:29.070259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.957 [2024-12-10 05:03:29.070271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.957 [2024-12-10 05:03:29.082484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:37.957 [2024-12-10 05:03:29.082505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.957 [2024-12-10 05:03:29.082513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.216 [2024-12-10 05:03:29.095311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.216 [2024-12-10 05:03:29.095332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.216 [2024-12-10 05:03:29.095340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.216 [2024-12-10 05:03:29.104707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.216 [2024-12-10 05:03:29.104727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.216 [2024-12-10 05:03:29.104735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.216 [2024-12-10 05:03:29.113400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.216 [2024-12-10 05:03:29.113420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.216 [2024-12-10 05:03:29.113428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.216 [2024-12-10 05:03:29.124810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.216 [2024-12-10 05:03:29.124831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.216 [2024-12-10 05:03:29.124839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.216 [2024-12-10 05:03:29.133093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.216 [2024-12-10 05:03:29.133113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.216 [2024-12-10 05:03:29.133121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.216 [2024-12-10 05:03:29.144281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.216 [2024-12-10 05:03:29.144301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.216 [2024-12-10 05:03:29.144309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.216 [2024-12-10 05:03:29.152736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.216 [2024-12-10 05:03:29.152756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.152763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.163329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.163349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.163357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.174084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.174104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.174113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.185190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.185210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.185219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.193430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.193451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.193459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.205745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.205766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.205774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.217565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.217587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.217595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.230023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.230044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.230053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.240362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.240382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.240390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.250820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.250841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.250853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.259589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.259609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.259617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.268563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.268583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.268591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.276759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.276779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.276787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.288305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.288326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.288334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.300120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.300141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.300149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.308363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.308383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.308391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.317996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.318017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.318025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.330010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.330030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.330038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.217 [2024-12-10 05:03:29.337536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.217 [2024-12-10 05:03:29.337562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.217 [2024-12-10 05:03:29.337570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.477 [2024-12-10 05:03:29.348838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.477 [2024-12-10 05:03:29.348870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.477 [2024-12-10 05:03:29.348878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.477 [2024-12-10 05:03:29.361514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.477 [2024-12-10 05:03:29.361535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.477 [2024-12-10 05:03:29.361543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.477 [2024-12-10 05:03:29.369436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.477 [2024-12-10 05:03:29.369456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.477 [2024-12-10 05:03:29.369464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.477 [2024-12-10 05:03:29.380218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.477 [2024-12-10 05:03:29.380239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.477 [2024-12-10 05:03:29.380247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.477 [2024-12-10 05:03:29.391769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.477 [2024-12-10 05:03:29.391790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.477 [2024-12-10 05:03:29.391798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.477 [2024-12-10 05:03:29.402490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.477 [2024-12-10 05:03:29.402510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.477 [2024-12-10 05:03:29.402518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.477 [2024-12-10 05:03:29.411730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.477 [2024-12-10 05:03:29.411751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.477 [2024-12-10 05:03:29.411759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.477 [2024-12-10 05:03:29.422062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.477 [2024-12-10 05:03:29.422083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.477 [2024-12-10 05:03:29.422091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.477 [2024-12-10 05:03:29.432388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.477 [2024-12-10 05:03:29.432408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.477 [2024-12-10 05:03:29.432416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.477 [2024-12-10 05:03:29.440603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.477 [2024-12-10 05:03:29.440625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.477 [2024-12-10 05:03:29.440633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.477 [2024-12-10 05:03:29.452557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.477 [2024-12-10 05:03:29.452578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.477 [2024-12-10 05:03:29.452587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.477 [2024-12-10 05:03:29.461779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.477 [2024-12-10 05:03:29.461799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.477 [2024-12-10 05:03:29.461807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.477 [2024-12-10 05:03:29.470040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.477 [2024-12-10 05:03:29.470061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.477 [2024-12-10 05:03:29.470070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.477 [2024-12-10 05:03:29.480320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.477 [2024-12-10 05:03:29.480341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.477 [2024-12-10 05:03:29.480349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.477 [2024-12-10 05:03:29.489741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.477 [2024-12-10 05:03:29.489762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.477 [2024-12-10 05:03:29.489770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.477 [2024-12-10 05:03:29.499288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.477 [2024-12-10 05:03:29.499309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.477 [2024-12-10 05:03:29.499317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.477 [2024-12-10 05:03:29.507290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.477 [2024-12-10 05:03:29.507310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.478 [2024-12-10 05:03:29.507322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.478 [2024-12-10 05:03:29.517202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.478 [2024-12-10 05:03:29.517222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.478 [2024-12-10 05:03:29.517230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.478 [2024-12-10 05:03:29.526810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.478 [2024-12-10 05:03:29.526831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.478 [2024-12-10 05:03:29.526840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.478 [2024-12-10 05:03:29.535524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.478 [2024-12-10 05:03:29.535544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.478 [2024-12-10 05:03:29.535552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.478 [2024-12-10 05:03:29.545437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.478 [2024-12-10 05:03:29.545458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.478 [2024-12-10 05:03:29.545466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.478 [2024-12-10 05:03:29.556313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.478 [2024-12-10 05:03:29.556333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.478 [2024-12-10 05:03:29.556341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.478 [2024-12-10 05:03:29.564860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.478 [2024-12-10 05:03:29.564880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.478 [2024-12-10 05:03:29.564888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.478 [2024-12-10 05:03:29.575463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.478 [2024-12-10 05:03:29.575483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.478 [2024-12-10 05:03:29.575492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.478 [2024-12-10 05:03:29.585078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.478 [2024-12-10 05:03:29.585098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.478 [2024-12-10 05:03:29.585106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.478 [2024-12-10 05:03:29.593103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.478 [2024-12-10 05:03:29.593127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.478 [2024-12-10 05:03:29.593135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.478 [2024-12-10 05:03:29.605931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212d590)
00:26:38.478 [2024-12-10 05:03:29.605952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:39 nsid:1 lba:3229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.478 [2024-12-10 05:03:29.605960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.737 25156.50 IOPS, 98.27 MiB/s 00:26:38.737 Latency(us) 00:26:38.737 [2024-12-10T04:03:29.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.737 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:38.737 nvme0n1 : 2.01 25145.67 98.23 0.00 0.00 5083.80 2153.33 18100.42 00:26:38.737 [2024-12-10T04:03:29.874Z] =================================================================================================================== 00:26:38.737 [2024-12-10T04:03:29.874Z] Total : 25145.67 98.23 0.00 0.00 5083.80 2153.33 18100.42 00:26:38.737 { 00:26:38.737 "results": [ 00:26:38.737 { 00:26:38.737 "job": "nvme0n1", 00:26:38.737 "core_mask": "0x2", 00:26:38.737 "workload": "randread", 00:26:38.737 "status": "finished", 00:26:38.737 "queue_depth": 128, 00:26:38.737 "io_size": 4096, 00:26:38.737 "runtime": 2.005952, 00:26:38.737 "iops": 25145.666496506397, 00:26:38.737 "mibps": 98.22525975197811, 00:26:38.737 "io_failed": 0, 00:26:38.737 "io_timeout": 0, 00:26:38.737 "avg_latency_us": 5083.795602858974, 00:26:38.737 "min_latency_us": 2153.325714285714, 00:26:38.737 "max_latency_us": 18100.41904761905 00:26:38.737 } 00:26:38.737 ], 00:26:38.737 "core_count": 1 00:26:38.737 } 00:26:38.737 05:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:38.737 05:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:38.737 05:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 
00:26:38.737 05:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:38.737 | .driver_specific 00:26:38.737 | .nvme_error 00:26:38.737 | .status_code 00:26:38.737 | .command_transient_transport_error' 00:26:38.737 05:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 197 > 0 )) 00:26:38.737 05:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 772022 00:26:38.737 05:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 772022 ']' 00:26:38.737 05:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 772022 00:26:38.737 05:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:38.737 05:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.737 05:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772022 00:26:38.997 05:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:38.997 05:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:38.997 05:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772022' 00:26:38.997 killing process with pid 772022 00:26:38.997 05:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 772022 00:26:38.997 Received shutdown signal, test time was about 2.000000 seconds 00:26:38.997 00:26:38.997 Latency(us) 00:26:38.997 [2024-12-10T04:03:30.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.997 [2024-12-10T04:03:30.134Z] 
=================================================================================================================== 00:26:38.997 [2024-12-10T04:03:30.134Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:38.997 05:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 772022 00:26:38.997 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:38.997 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:38.997 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:38.997 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:38.997 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:38.997 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=772526 00:26:38.997 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 772526 /var/tmp/bperf.sock 00:26:38.997 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:38.997 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 772526 ']' 00:26:38.997 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:38.997 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:38.997 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:38.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:38.997 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:38.997 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.997 [2024-12-10 05:03:30.086309] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:26:38.997 [2024-12-10 05:03:30.086361] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772526 ] 00:26:38.997 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:38.997 Zero copy mechanism will not be used. 00:26:39.257 [2024-12-10 05:03:30.160158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.257 [2024-12-10 05:03:30.199582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.257 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.257 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:39.257 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:39.257 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:39.515 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:39.515 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.515 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.515 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.515 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:39.515 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:39.773 nvme0n1 00:26:39.773 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:39.773 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.773 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.773 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.773 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:39.773 05:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:39.773 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:39.773 Zero copy mechanism will not be used. 00:26:39.773 Running I/O for 2 seconds... 
00:26:40.033 [2024-12-10 05:03:30.910856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:30.910890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:30.910901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:30.916548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:30.916573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:30.916582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:30.923626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:30.923650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:30.923659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:30.931056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:30.931079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:30.931088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:30.937571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:30.937595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:30.937603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:30.943061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:30.943084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:30.943096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:30.948205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:30.948227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:30.948235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:30.953436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:30.953458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:30.953466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:30.958732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:30.958754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:30.958762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:30.964128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:30.964150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:30.964158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:30.969368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:30.969390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:30.969398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:30.974691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:30.974714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:40.033 [2024-12-10 05:03:30.974722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:30.980075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:30.980097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:30.980106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:30.985323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:30.985352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:30.985360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:30.990441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:30.990468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:30.990477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:30.995380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:30.995402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:30.995410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:31.000573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:31.000595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:31.000603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:31.005618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:31.005640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:31.005648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:31.010771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:31.010793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:31.010801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:31.015903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:31.015925] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:31.015933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:31.020942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:31.020964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:31.020972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:31.026009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:31.026029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:31.026037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:31.031521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:31.031544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.033 [2024-12-10 05:03:31.031552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.033 [2024-12-10 05:03:31.036804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fc0640) 00:26:40.033 [2024-12-10 05:03:31.036826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.034 [2024-12-10 05:03:31.036834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.034 [2024-12-10 05:03:31.042147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.034 [2024-12-10 05:03:31.042176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.034 [2024-12-10 05:03:31.042185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.034 [2024-12-10 05:03:31.047473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.034 [2024-12-10 05:03:31.047495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.034 [2024-12-10 05:03:31.047504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.034 [2024-12-10 05:03:31.052694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.034 [2024-12-10 05:03:31.052716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.034 [2024-12-10 05:03:31.052724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.034 [2024-12-10 05:03:31.057898] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.034 [2024-12-10 05:03:31.057920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.034 [2024-12-10 05:03:31.057928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.034 [2024-12-10 05:03:31.063301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.034 [2024-12-10 05:03:31.063323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.034 [2024-12-10 05:03:31.063332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.034 [2024-12-10 05:03:31.068587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.034 [2024-12-10 05:03:31.068609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.034 [2024-12-10 05:03:31.068617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.034 [2024-12-10 05:03:31.073885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.034 [2024-12-10 05:03:31.073908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.034 [2024-12-10 05:03:31.073917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:40.034 [2024-12-10 05:03:31.079127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.034 [2024-12-10 05:03:31.079150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.034 [2024-12-10 05:03:31.079161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.034 [2024-12-10 05:03:31.084439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.034 [2024-12-10 05:03:31.084461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.034 [2024-12-10 05:03:31.084469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.034 [2024-12-10 05:03:31.089797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.034 [2024-12-10 05:03:31.089820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.034 [2024-12-10 05:03:31.089828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.034 [2024-12-10 05:03:31.095475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.034 [2024-12-10 05:03:31.095497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.034 [2024-12-10 05:03:31.095506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.034 [2024-12-10 05:03:31.100693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.034 [2024-12-10 05:03:31.100715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.034 [2024-12-10 05:03:31.100724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.034 [2024-12-10 05:03:31.105974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.034 [2024-12-10 05:03:31.105996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.034 [2024-12-10 05:03:31.106004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.034 [2024-12-10 05:03:31.111195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.034 [2024-12-10 05:03:31.111216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.034 [2024-12-10 05:03:31.111224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.034 [2024-12-10 05:03:31.116462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.034 [2024-12-10 05:03:31.116484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.034 [2024-12-10 
05:03:31.116492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.034 [2024-12-10 05:03:31.121743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.034 [2024-12-10 05:03:31.121767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.034 [2024-12-10 05:03:31.121775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.034 [2024-12-10 05:03:31.126904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.034 [2024-12-10 05:03:31.126932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.034 [2024-12-10 05:03:31.126940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.034 [2024-12-10 05:03:31.132308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.034 [2024-12-10 05:03:31.132332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.034 [2024-12-10 05:03:31.132340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.034 [2024-12-10 05:03:31.137547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.034 [2024-12-10 05:03:31.137570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.034 [2024-12-10 05:03:31.137578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.034 [2024-12-10 05:03:31.142874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.034 [2024-12-10 05:03:31.142898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.034 [2024-12-10 05:03:31.142906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.034 [2024-12-10 05:03:31.148280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.034 [2024-12-10 05:03:31.148302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.034 [2024-12-10 05:03:31.148310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.034 [2024-12-10 05:03:31.153603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.034 [2024-12-10 05:03:31.153625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.034 [2024-12-10 05:03:31.153633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.034 [2024-12-10 05:03:31.158794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.034 [2024-12-10 05:03:31.158816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.034 [2024-12-10 05:03:31.158824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.034 [2024-12-10 05:03:31.164093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.034 [2024-12-10 05:03:31.164116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.034 [2024-12-10 05:03:31.164124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.295 [2024-12-10 05:03:31.169395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.295 [2024-12-10 05:03:31.169418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.295 [2024-12-10 05:03:31.169427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.295 [2024-12-10 05:03:31.174557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.295 [2024-12-10 05:03:31.174580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.295 [2024-12-10 05:03:31.174589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.295 [2024-12-10 05:03:31.179737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.295 [2024-12-10 05:03:31.179759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.295 [2024-12-10 05:03:31.179767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.295 [2024-12-10 05:03:31.184900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.295 [2024-12-10 05:03:31.184922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.295 [2024-12-10 05:03:31.184930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.295 [2024-12-10 05:03:31.190063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.295 [2024-12-10 05:03:31.190085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.295 [2024-12-10 05:03:31.190093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.295 [2024-12-10 05:03:31.195196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.295 [2024-12-10 05:03:31.195217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.295 [2024-12-10 05:03:31.195224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.295 [2024-12-10 05:03:31.200289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.295 [2024-12-10 05:03:31.200310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.295 [2024-12-10 05:03:31.200318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.295 [2024-12-10 05:03:31.205462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.295 [2024-12-10 05:03:31.205483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.295 [2024-12-10 05:03:31.205492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.295 [2024-12-10 05:03:31.210614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.295 [2024-12-10 05:03:31.210635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.295 [2024-12-10 05:03:31.210643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.295 [2024-12-10 05:03:31.215797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.295 [2024-12-10 05:03:31.215822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.295 [2024-12-10 05:03:31.215830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.295 [2024-12-10 05:03:31.220939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.295 [2024-12-10 05:03:31.220959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.295 [2024-12-10 05:03:31.220968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.295 [2024-12-10 05:03:31.226151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.295 [2024-12-10 05:03:31.226178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.295 [2024-12-10 05:03:31.226187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.295 [2024-12-10 05:03:31.232128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.295 [2024-12-10 05:03:31.232151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.295 [2024-12-10 05:03:31.232159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.295 [2024-12-10 05:03:31.237406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.295 [2024-12-10 05:03:31.237428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.295 [2024-12-10 05:03:31.237437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.242494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.242516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.242524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.247612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.247634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.247642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.252761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.252784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.252792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.257883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.257905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.257913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.263267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.263289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.263298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.269123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.269145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.269153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.275208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.275230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.275238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.280378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.280400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.280408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.285491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.285512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.285520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.290627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.290648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.290656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.294054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.294075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.294083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.297922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.297944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.297953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.302949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.302971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.302983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.308032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.308054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.308063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.313116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.313138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.313146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.318241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.318263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.318271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.323289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.323310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.323318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.328326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.328348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.328356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.333359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.333380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.333388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.339046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.339068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.339076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.344153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.344182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.344206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.349204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.349229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.349238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.354221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.354243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.354251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.359237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.359260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.359268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.364211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.364233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.364242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.369195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.369216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.369225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.374125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.374148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.296 [2024-12-10 05:03:31.374156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.296 [2024-12-10 05:03:31.379097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.296 [2024-12-10 05:03:31.379120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.297 [2024-12-10 05:03:31.379128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.297 [2024-12-10 05:03:31.384232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.297 [2024-12-10 05:03:31.384254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.297 [2024-12-10 05:03:31.384263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.297 [2024-12-10 05:03:31.389337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.297 [2024-12-10 05:03:31.389360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.297 [2024-12-10 05:03:31.389368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.297 [2024-12-10 05:03:31.394549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.297 [2024-12-10 05:03:31.394570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.297 [2024-12-10 05:03:31.394578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.297 [2024-12-10 05:03:31.399737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.297 [2024-12-10 05:03:31.399759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.297 [2024-12-10 05:03:31.399768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.297 [2024-12-10 05:03:31.404945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.297 [2024-12-10 05:03:31.404967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.297 [2024-12-10 05:03:31.404975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.297 [2024-12-10 05:03:31.410072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.297 [2024-12-10 05:03:31.410093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.297 [2024-12-10 05:03:31.410101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.297 [2024-12-10 05:03:31.415390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.297 [2024-12-10 05:03:31.415412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.297 [2024-12-10 05:03:31.415431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.297 [2024-12-10 05:03:31.420589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.297 [2024-12-10 05:03:31.420611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.297 [2024-12-10 05:03:31.420620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.297 [2024-12-10 05:03:31.425824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.297 [2024-12-10 05:03:31.425845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.297 [2024-12-10 05:03:31.425853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.557 [2024-12-10 05:03:31.430998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.557 [2024-12-10 05:03:31.431021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.557 [2024-12-10 05:03:31.431029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.557 [2024-12-10 05:03:31.436189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.557 [2024-12-10 05:03:31.436226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.557 [2024-12-10 05:03:31.436238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.557 [2024-12-10 05:03:31.441451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.557 [2024-12-10 05:03:31.441473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.557 [2024-12-10 05:03:31.441481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.557 [2024-12-10 05:03:31.446685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.557 [2024-12-10 05:03:31.446707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.557 [2024-12-10 05:03:31.446714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.557 [2024-12-10 05:03:31.451850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.557 [2024-12-10 05:03:31.451871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.557 [2024-12-10 05:03:31.451879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.557 [2024-12-10 05:03:31.457000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.557 [2024-12-10 05:03:31.457022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.557 [2024-12-10 05:03:31.457030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.557 [2024-12-10 05:03:31.462135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.557 [2024-12-10 05:03:31.462156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.557 [2024-12-10 05:03:31.462164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.557 [2024-12-10 05:03:31.467263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.557 [2024-12-10 05:03:31.467284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.557 [2024-12-10 05:03:31.467292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.557 [2024-12-10 05:03:31.472392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.557 [2024-12-10 05:03:31.472413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.557 [2024-12-10 05:03:31.472421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.557 [2024-12-10 05:03:31.477506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.557 [2024-12-10 05:03:31.477528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.558 [2024-12-10 05:03:31.477536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.558 [2024-12-10 05:03:31.482619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.558 [2024-12-10 05:03:31.482642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.558 [2024-12-10 05:03:31.482651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.558 [2024-12-10 05:03:31.487756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.558 [2024-12-10 05:03:31.487778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.558 [2024-12-10 05:03:31.487786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.558 [2024-12-10 05:03:31.492913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.558 [2024-12-10 05:03:31.492936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.558 [2024-12-10 05:03:31.492944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:40.558 [2024-12-10 05:03:31.498098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.558 [2024-12-10 05:03:31.498120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.558 [2024-12-10 05:03:31.498129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:40.558 [2024-12-10 05:03:31.503270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.558 [2024-12-10 05:03:31.503292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.558 [2024-12-10 05:03:31.503300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:40.558 [2024-12-10 05:03:31.508447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.558 [2024-12-10 05:03:31.508469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.558 [2024-12-10 05:03:31.508478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.558 [2024-12-10 05:03:31.513635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.558 [2024-12-10 05:03:31.513657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.558 [2024-12-10 05:03:31.513665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062
p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.518828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.518850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.518858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.524016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.524038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.524050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.529220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.529242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.529251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.534400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.534422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.534431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.539603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.539625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.539633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.544766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.544788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.544796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.550000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.550022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.550030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.555202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.555223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.555231] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.560274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.560296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.560304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.565450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.565472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.565480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.570627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.570653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.570661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.575722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.575746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.575754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.580870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.580891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.580899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.585971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.585992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.586000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.591112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.591134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.591142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.596280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.596302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.596311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.601441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.601463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.601471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.607562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.607584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.607592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.614871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.614893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.614901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.622075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.558 [2024-12-10 05:03:31.622099] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.558 [2024-12-10 05:03:31.622107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.558 [2024-12-10 05:03:31.628905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.559 [2024-12-10 05:03:31.628928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.559 [2024-12-10 05:03:31.628936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.559 [2024-12-10 05:03:31.636209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.559 [2024-12-10 05:03:31.636232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.559 [2024-12-10 05:03:31.636240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.559 [2024-12-10 05:03:31.643507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.559 [2024-12-10 05:03:31.643531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.559 [2024-12-10 05:03:31.643539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.559 [2024-12-10 05:03:31.649807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fc0640) 00:26:40.559 [2024-12-10 05:03:31.649830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.559 [2024-12-10 05:03:31.649839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.559 [2024-12-10 05:03:31.655881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.559 [2024-12-10 05:03:31.655904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.559 [2024-12-10 05:03:31.655912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.559 [2024-12-10 05:03:31.662684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.559 [2024-12-10 05:03:31.662707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.559 [2024-12-10 05:03:31.662715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.559 [2024-12-10 05:03:31.668850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.559 [2024-12-10 05:03:31.668873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.559 [2024-12-10 05:03:31.668881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.559 [2024-12-10 05:03:31.676083] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.559 [2024-12-10 05:03:31.676107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.559 [2024-12-10 05:03:31.676119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.559 [2024-12-10 05:03:31.683494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.559 [2024-12-10 05:03:31.683517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.559 [2024-12-10 05:03:31.683526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.819 [2024-12-10 05:03:31.690939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.819 [2024-12-10 05:03:31.690963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.819 [2024-12-10 05:03:31.690972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.819 [2024-12-10 05:03:31.698653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.819 [2024-12-10 05:03:31.698676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.819 [2024-12-10 05:03:31.698685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:40.819 [2024-12-10 05:03:31.706034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.819 [2024-12-10 05:03:31.706057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.819 [2024-12-10 05:03:31.706066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.819 [2024-12-10 05:03:31.713108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.819 [2024-12-10 05:03:31.713131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.819 [2024-12-10 05:03:31.713140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.819 [2024-12-10 05:03:31.719175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.819 [2024-12-10 05:03:31.719197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.819 [2024-12-10 05:03:31.719205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.819 [2024-12-10 05:03:31.725467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.819 [2024-12-10 05:03:31.725489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.819 [2024-12-10 05:03:31.725499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.819 [2024-12-10 05:03:31.731784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.819 [2024-12-10 05:03:31.731806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.819 [2024-12-10 05:03:31.731815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.819 [2024-12-10 05:03:31.737598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.819 [2024-12-10 05:03:31.737625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.819 [2024-12-10 05:03:31.737634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.819 [2024-12-10 05:03:31.742801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.819 [2024-12-10 05:03:31.742823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.819 [2024-12-10 05:03:31.742831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.819 [2024-12-10 05:03:31.748019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.819 [2024-12-10 05:03:31.748041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.819 [2024-12-10 05:03:31.748049] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.820 [2024-12-10 05:03:31.753217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.820 [2024-12-10 05:03:31.753240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.820 [2024-12-10 05:03:31.753248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.820 [2024-12-10 05:03:31.758450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.820 [2024-12-10 05:03:31.758472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.820 [2024-12-10 05:03:31.758480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.820 [2024-12-10 05:03:31.763656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.820 [2024-12-10 05:03:31.763677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.820 [2024-12-10 05:03:31.763685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.820 [2024-12-10 05:03:31.769129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.820 [2024-12-10 05:03:31.769151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:40.820 [2024-12-10 05:03:31.769159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.820 [2024-12-10 05:03:31.774782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.820 [2024-12-10 05:03:31.774804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.820 [2024-12-10 05:03:31.774813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.820 [2024-12-10 05:03:31.780040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.820 [2024-12-10 05:03:31.780062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.820 [2024-12-10 05:03:31.780069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.820 [2024-12-10 05:03:31.785208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.820 [2024-12-10 05:03:31.785229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.820 [2024-12-10 05:03:31.785237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.820 [2024-12-10 05:03:31.790354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.820 [2024-12-10 05:03:31.790376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.820 [2024-12-10 05:03:31.790384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.820 [2024-12-10 05:03:31.795487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.820 [2024-12-10 05:03:31.795509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.820 [2024-12-10 05:03:31.795517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.820 [2024-12-10 05:03:31.800597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.820 [2024-12-10 05:03:31.800619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.820 [2024-12-10 05:03:31.800627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.820 [2024-12-10 05:03:31.805727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.820 [2024-12-10 05:03:31.805749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.820 [2024-12-10 05:03:31.805757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.820 [2024-12-10 05:03:31.810842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:40.820 [2024-12-10 05:03:31.810863] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.820 [2024-12-10 05:03:31.810872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:40.820 [2024-12-10 05:03:31.815943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640)
00:26:40.821 5712.00 IOPS, 714.00 MiB/s [2024-12-10T04:03:31.958Z]
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.343 [2024-12-10 05:03:32.235264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.343 [2024-12-10 05:03:32.240417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.343 [2024-12-10 05:03:32.240439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.343 [2024-12-10 05:03:32.240447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.343 [2024-12-10 05:03:32.245551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.343 [2024-12-10 05:03:32.245574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.343 [2024-12-10 05:03:32.245582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.343 [2024-12-10 05:03:32.250741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.343 [2024-12-10 05:03:32.250763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.343 [2024-12-10 05:03:32.250771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.343 [2024-12-10 05:03:32.255925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.343 [2024-12-10 05:03:32.255946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.343 [2024-12-10 05:03:32.255955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.343 [2024-12-10 05:03:32.261098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.343 [2024-12-10 05:03:32.261120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.343 [2024-12-10 05:03:32.261128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.343 [2024-12-10 05:03:32.266227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.343 [2024-12-10 05:03:32.266248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.343 [2024-12-10 05:03:32.266256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.343 [2024-12-10 05:03:32.271371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.271393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.271401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.276474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 
00:26:41.344 [2024-12-10 05:03:32.276496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.276507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.281681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.281702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.281710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.287010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.287032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.287041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.292211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.292232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.292240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.297515] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.297537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.297546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.302747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.302769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.302777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.307963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.307983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.307992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.313158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.313187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.313195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.318389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.318411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.318419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.323624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.323649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.323657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.329828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.329850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.329859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.337178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.337200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.337208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.344610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.344633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.344641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.352394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.352417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.352426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.360586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.360609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.360617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.368331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.368354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.368363] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.376362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.376385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.376394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.384504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.384527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.384539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.391810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.391832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.391841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.399437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.399461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.399469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.407023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.407046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.407055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.415016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.415038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.415047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.422600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.422624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.422632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.430065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.430088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.430097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.438244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.438270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.438279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.444896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.444920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.444929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.344 [2024-12-10 05:03:32.450829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.344 [2024-12-10 05:03:32.450858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.344 [2024-12-10 05:03:32.450867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.345 [2024-12-10 05:03:32.456630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.345 [2024-12-10 
05:03:32.456653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.345 [2024-12-10 05:03:32.456661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.345 [2024-12-10 05:03:32.463305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.345 [2024-12-10 05:03:32.463328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.345 [2024-12-10 05:03:32.463336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.345 [2024-12-10 05:03:32.470904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.345 [2024-12-10 05:03:32.470927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.345 [2024-12-10 05:03:32.470936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.477372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.477394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.477402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.483572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.483596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.483605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.489029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.489051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.489059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.494222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.494244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.494252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.499380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.499401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.499410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.504650] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.504672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.504680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.509991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.510014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.510021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.515242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.515263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.515271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.520365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.520388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.520395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.525490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.525512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.525520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.530487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.530509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.530517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.535523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.535545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.535553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.540896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.540918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.540926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.546073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.546095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.546109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.551112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.551133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.551141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.556213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.556234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.556241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.561380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.561402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 
05:03:32.561410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.566504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.566526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.566534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.571697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.571719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.571728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.576897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.576918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.576926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.582023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.582044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2240 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.582052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.587162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.587190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.605 [2024-12-10 05:03:32.587198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.605 [2024-12-10 05:03:32.592542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.605 [2024-12-10 05:03:32.592567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.592575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.597977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.597999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.598007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.603281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.603307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.603315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.608600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.608621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.608630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.613880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.613902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.613910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.619184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.619206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.619214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.624464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.624486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.624494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.629790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.629812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.629821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.634487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.634509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.634517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.639570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.639591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.639599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.644473] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.644494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.644502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.649822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.649843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.649851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.655396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.655419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.655428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.660835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.660858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.660866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.666328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.666350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.666359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.671713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.671735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.671743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.677068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.677090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.677098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.680587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.680607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.680619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.685048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.685071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.685079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.690441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.690463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.690472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.696096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.696118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.696126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.701458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.701480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.701488] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.706840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.706862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.706870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.712574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.712596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.712604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.718216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.718239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.718247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.723564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.723586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.723594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.729129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.729151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.729159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.606 [2024-12-10 05:03:32.734462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.606 [2024-12-10 05:03:32.734484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.606 [2024-12-10 05:03:32.734493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.739839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.739861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.739869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.745137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.745159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.745173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.750173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.750194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.750203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.755362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.755393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.755402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.760469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.760490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.760498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.765525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.765548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.765558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.770640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.770663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.770674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.775904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.775926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.775934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.781153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.781183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.781192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.786380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.786406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.786414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.791614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.791636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.791644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.796786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.796808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.796817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.801980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.802004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.802012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.807248] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.807270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.807278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.812567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.812589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.812598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.818015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.818042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.818051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.823496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.823519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.823527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.828695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.828717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.828725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.833915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.833938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.833946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.839129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.839151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.839159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.844318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.844340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.844348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.849531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.849553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.849561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.854698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.854720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.854729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.859902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.859924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.859932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.865037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.865061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 
05:03:32.865069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.870335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.870358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.867 [2024-12-10 05:03:32.870366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.867 [2024-12-10 05:03:32.875598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.867 [2024-12-10 05:03:32.875621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.868 [2024-12-10 05:03:32.875629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.868 [2024-12-10 05:03:32.880830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.868 [2024-12-10 05:03:32.880854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.868 [2024-12-10 05:03:32.880862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.868 [2024-12-10 05:03:32.885741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.868 [2024-12-10 05:03:32.885764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5632 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.868 [2024-12-10 05:03:32.885772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.868 [2024-12-10 05:03:32.890821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.868 [2024-12-10 05:03:32.890843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.868 [2024-12-10 05:03:32.890851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.868 [2024-12-10 05:03:32.895874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.868 [2024-12-10 05:03:32.895897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.868 [2024-12-10 05:03:32.895904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.868 [2024-12-10 05:03:32.901048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.868 [2024-12-10 05:03:32.901071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.868 [2024-12-10 05:03:32.901079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.868 [2024-12-10 05:03:32.906243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.868 [2024-12-10 05:03:32.906265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.868 [2024-12-10 05:03:32.906277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.868 5701.00 IOPS, 712.62 MiB/s [2024-12-10T04:03:33.005Z] [2024-12-10 05:03:32.912192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc0640) 00:26:41.868 [2024-12-10 05:03:32.912214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.868 [2024-12-10 05:03:32.912223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.868 00:26:41.868 Latency(us) 00:26:41.868 [2024-12-10T04:03:33.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.868 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:41.868 nvme0n1 : 2.00 5699.56 712.44 0.00 0.00 2804.06 760.69 9549.53 00:26:41.868 [2024-12-10T04:03:33.005Z] =================================================================================================================== 00:26:41.868 [2024-12-10T04:03:33.005Z] Total : 5699.56 712.44 0.00 0.00 2804.06 760.69 9549.53 00:26:41.868 { 00:26:41.868 "results": [ 00:26:41.868 { 00:26:41.868 "job": "nvme0n1", 00:26:41.868 "core_mask": "0x2", 00:26:41.868 "workload": "randread", 00:26:41.868 "status": "finished", 00:26:41.868 "queue_depth": 16, 00:26:41.868 "io_size": 131072, 00:26:41.868 "runtime": 2.003664, 00:26:41.868 "iops": 5699.5584089947215, 00:26:41.868 "mibps": 712.4448011243402, 00:26:41.868 "io_failed": 0, 00:26:41.868 "io_timeout": 0, 00:26:41.868 "avg_latency_us": 2804.0628124426653, 00:26:41.868 "min_latency_us": 760.6857142857143, 00:26:41.868 
"max_latency_us": 9549.531428571428 00:26:41.868 } 00:26:41.868 ], 00:26:41.868 "core_count": 1 00:26:41.868 } 00:26:41.868 05:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:41.868 05:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:41.868 05:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:41.868 | .driver_specific 00:26:41.868 | .nvme_error 00:26:41.868 | .status_code 00:26:41.868 | .command_transient_transport_error' 00:26:41.868 05:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:42.127 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 369 > 0 )) 00:26:42.127 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 772526 00:26:42.127 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 772526 ']' 00:26:42.127 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 772526 00:26:42.127 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:42.127 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.127 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772526 00:26:42.127 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:42.127 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:42.127 05:03:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772526' 00:26:42.127 killing process with pid 772526 00:26:42.127 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 772526 00:26:42.127 Received shutdown signal, test time was about 2.000000 seconds 00:26:42.127 00:26:42.127 Latency(us) 00:26:42.127 [2024-12-10T04:03:33.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.127 [2024-12-10T04:03:33.264Z] =================================================================================================================== 00:26:42.127 [2024-12-10T04:03:33.264Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:42.127 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 772526 00:26:42.386 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:42.386 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:42.386 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:42.386 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:42.386 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:42.386 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=773154 00:26:42.386 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 773154 /var/tmp/bperf.sock 00:26:42.386 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:42.386 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@835 -- # '[' -z 773154 ']' 00:26:42.386 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:42.386 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:42.386 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:42.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:42.386 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:42.386 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:42.386 [2024-12-10 05:03:33.396870] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:26:42.386 [2024-12-10 05:03:33.396917] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773154 ] 00:26:42.386 [2024-12-10 05:03:33.470959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.386 [2024-12-10 05:03:33.511544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.645 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:42.645 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:42.645 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:42.645 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:42.904 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:42.904 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.904 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:42.904 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.904 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:42.904 05:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:43.163 nvme0n1 00:26:43.163 05:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:43.163 05:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.163 05:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:43.163 05:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.163 05:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:43.163 05:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:43.422 Running I/O for 2 seconds... 00:26:43.422 [2024-12-10 05:03:34.325062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef7538 00:26:43.422 [2024-12-10 05:03:34.325962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.422 [2024-12-10 05:03:34.325992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:43.422 [2024-12-10 05:03:34.335120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efc560 00:26:43.422 [2024-12-10 05:03:34.336063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.422 [2024-12-10 05:03:34.336087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:43.422 [2024-12-10 05:03:34.344100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef6890 00:26:43.422 [2024-12-10 05:03:34.345041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.422 [2024-12-10 05:03:34.345062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:43.422 [2024-12-10 05:03:34.353048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efe2e8 00:26:43.422 [2024-12-10 05:03:34.354053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:191 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:43.422 [2024-12-10 05:03:34.354074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:43.422 [2024-12-10 05:03:34.361728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efd208 00:26:43.423 [2024-12-10 05:03:34.362741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.362761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.370120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee6738 00:26:43.423 [2024-12-10 05:03:34.370753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.370773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.379040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee8088 00:26:43.423 [2024-12-10 05:03:34.379602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.379625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.388249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee88f8 00:26:43.423 [2024-12-10 05:03:34.389119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:78 nsid:1 lba:1530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.389139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.397644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efef90 00:26:43.423 [2024-12-10 05:03:34.398652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.398672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.405981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef7970 00:26:43.423 [2024-12-10 05:03:34.406645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.406666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.414747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef8a50 00:26:43.423 [2024-12-10 05:03:34.415380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.415400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.423137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee49b0 00:26:43.423 [2024-12-10 05:03:34.423773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.423792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.434891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee8d30 00:26:43.423 [2024-12-10 05:03:34.436152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.436178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.442042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efe2e8 00:26:43.423 [2024-12-10 05:03:34.442914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.442934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.451184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efb480 00:26:43.423 [2024-12-10 05:03:34.452063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.452083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.459938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef7da8 00:26:43.423 
[2024-12-10 05:03:34.460785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.460805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.469673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef4f40 00:26:43.423 [2024-12-10 05:03:34.470318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.470339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.478904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee7818 00:26:43.423 [2024-12-10 05:03:34.479793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.479813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.488733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef0788 00:26:43.423 [2024-12-10 05:03:34.490190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.490209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.495892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10b2390) with pdu=0x200016efb480 00:26:43.423 [2024-12-10 05:03:34.496862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.496881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.504944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eed920 00:26:43.423 [2024-12-10 05:03:34.505888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.505908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.513651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efe720 00:26:43.423 [2024-12-10 05:03:34.514564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.514584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.522742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee73e0 00:26:43.423 [2024-12-10 05:03:34.523608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.523628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.532465] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee95a0 00:26:43.423 [2024-12-10 05:03:34.533558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.533577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.541489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef7970 00:26:43.423 [2024-12-10 05:03:34.542225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.542245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:43.423 [2024-12-10 05:03:34.550021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efc560 00:26:43.423 [2024-12-10 05:03:34.551375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.423 [2024-12-10 05:03:34.551395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:43.683 [2024-12-10 05:03:34.560093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eed4e8 00:26:43.683 [2024-12-10 05:03:34.561336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.683 [2024-12-10 05:03:34.561357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:26:43.683 [2024-12-10 05:03:34.568561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eed0b0 00:26:43.683 [2024-12-10 05:03:34.569608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.683 [2024-12-10 05:03:34.569628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:43.683 [2024-12-10 05:03:34.577077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efe2e8 00:26:43.683 [2024-12-10 05:03:34.578039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.683 [2024-12-10 05:03:34.578058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:43.683 [2024-12-10 05:03:34.585994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef6cc8 00:26:43.683 [2024-12-10 05:03:34.586515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.683 [2024-12-10 05:03:34.586535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:43.683 [2024-12-10 05:03:34.595106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee6738 00:26:43.683 [2024-12-10 05:03:34.595915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.683 [2024-12-10 05:03:34.595934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:43.683 [2024-12-10 05:03:34.604295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efa7d8 00:26:43.683 [2024-12-10 05:03:34.605118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.683 [2024-12-10 05:03:34.605139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:43.683 [2024-12-10 05:03:34.613349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efb8b8 00:26:43.683 [2024-12-10 05:03:34.614173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.683 [2024-12-10 05:03:34.614212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:43.683 [2024-12-10 05:03:34.621811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee8088 00:26:43.683 [2024-12-10 05:03:34.622631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.683 [2024-12-10 05:03:34.622650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:43.683 [2024-12-10 05:03:34.632828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee0630 00:26:43.683 [2024-12-10 05:03:34.633984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.683 [2024-12-10 05:03:34.634004] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:43.683 [2024-12-10 05:03:34.641461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef92c0 00:26:43.683 [2024-12-10 05:03:34.642645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.683 [2024-12-10 05:03:34.642665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:43.683 [2024-12-10 05:03:34.650029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef6890 00:26:43.683 [2024-12-10 05:03:34.650866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.683 [2024-12-10 05:03:34.650886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:43.683 [2024-12-10 05:03:34.659280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef81e0 00:26:43.683 [2024-12-10 05:03:34.659899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.683 [2024-12-10 05:03:34.659919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:43.683 [2024-12-10 05:03:34.668677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee1b48 00:26:43.683 [2024-12-10 05:03:34.669605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.683 [2024-12-10 05:03:34.669625] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:43.683 [2024-12-10 05:03:34.677940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eebb98 00:26:43.683 [2024-12-10 05:03:34.678957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.683 [2024-12-10 05:03:34.678976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:43.683 [2024-12-10 05:03:34.686390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef31b8 00:26:43.683 [2024-12-10 05:03:34.687363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.683 [2024-12-10 05:03:34.687381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:43.683 [2024-12-10 05:03:34.695921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee9168 00:26:43.684 [2024-12-10 05:03:34.697061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.684 [2024-12-10 05:03:34.697081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:43.684 [2024-12-10 05:03:34.704217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee5ec8 00:26:43.684 [2024-12-10 05:03:34.705033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:43.684 [2024-12-10 05:03:34.705053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:43.684 [2024-12-10 05:03:34.713030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef0788 00:26:43.684 [2024-12-10 05:03:34.713852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.684 [2024-12-10 05:03:34.713871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:43.684 [2024-12-10 05:03:34.722193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee7818 00:26:43.684 [2024-12-10 05:03:34.722787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.684 [2024-12-10 05:03:34.722806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:43.684 [2024-12-10 05:03:34.730228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef5378 00:26:43.684 [2024-12-10 05:03:34.730989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.684 [2024-12-10 05:03:34.731008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:43.684 [2024-12-10 05:03:34.739949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef2510 00:26:43.684 [2024-12-10 05:03:34.740557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 
lba:17907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.684 [2024-12-10 05:03:34.740577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:43.684 [2024-12-10 05:03:34.750306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef2d80 00:26:43.684 [2024-12-10 05:03:34.751678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.684 [2024-12-10 05:03:34.751697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:43.684 [2024-12-10 05:03:34.757465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef92c0 00:26:43.684 [2024-12-10 05:03:34.758356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.684 [2024-12-10 05:03:34.758374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:43.684 [2024-12-10 05:03:34.767433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee8088 00:26:43.684 [2024-12-10 05:03:34.768445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.684 [2024-12-10 05:03:34.768465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:43.684 [2024-12-10 05:03:34.776590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef0788 00:26:43.684 [2024-12-10 05:03:34.777726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.684 [2024-12-10 05:03:34.777746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:43.684 [2024-12-10 05:03:34.783878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef57b0 00:26:43.684 [2024-12-10 05:03:34.784547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.684 [2024-12-10 05:03:34.784567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:43.684 [2024-12-10 05:03:34.793044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eff3c8 00:26:43.684 [2024-12-10 05:03:34.793596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.684 [2024-12-10 05:03:34.793615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:43.684 [2024-12-10 05:03:34.802142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef6cc8 00:26:43.684 [2024-12-10 05:03:34.802942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.684 [2024-12-10 05:03:34.802962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:43.684 [2024-12-10 05:03:34.811379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efd208 
00:26:43.684 [2024-12-10 05:03:34.811973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.684 [2024-12-10 05:03:34.811994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.822158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee73e0 00:26:43.944 [2024-12-10 05:03:34.823510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.823530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.830471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee7818 00:26:43.944 [2024-12-10 05:03:34.831518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.831538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.839544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee4140 00:26:43.944 [2024-12-10 05:03:34.840614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.840633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.848648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x10b2390) with pdu=0x200016ef8a50 00:26:43.944 [2024-12-10 05:03:34.849694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.849717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.857768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efc998 00:26:43.944 [2024-12-10 05:03:34.858805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.858824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.866789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ede470 00:26:43.944 [2024-12-10 05:03:34.867817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.867836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.875665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef0ff8 00:26:43.944 [2024-12-10 05:03:34.876672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.876691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.884603] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef5378 00:26:43.944 [2024-12-10 05:03:34.885618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.885637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.893571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ede038 00:26:43.944 [2024-12-10 05:03:34.894629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.894647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.901808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016edfdc0 00:26:43.944 [2024-12-10 05:03:34.903039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.903058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.910069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016edf988 00:26:43.944 [2024-12-10 05:03:34.910742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.910761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:26:43.944 [2024-12-10 05:03:34.918953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efd208 00:26:43.944 [2024-12-10 05:03:34.919652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.919672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.927830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efe2e8 00:26:43.944 [2024-12-10 05:03:34.928498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.928517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.936195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee9168 00:26:43.944 [2024-12-10 05:03:34.936845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.936865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.946148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eff3c8 00:26:43.944 [2024-12-10 05:03:34.946957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.946976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.954538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eebfd0 00:26:43.944 [2024-12-10 05:03:34.955329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.955349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.965506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eddc00 00:26:43.944 [2024-12-10 05:03:34.966638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.966657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.973054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee9e10 00:26:43.944 [2024-12-10 05:03:34.973523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.973543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.982214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee38d0 00:26:43.944 [2024-12-10 05:03:34.982988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.983007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:34.991393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef1868 00:26:43.944 [2024-12-10 05:03:34.992068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:34.992088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:35.001625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee1f80 00:26:43.944 [2024-12-10 05:03:35.002975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:35.002995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:35.009960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ede470 00:26:43.944 [2024-12-10 05:03:35.010981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:35.011001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:43.944 [2024-12-10 05:03:35.018755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efa3a0 00:26:43.944 [2024-12-10 05:03:35.019757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.944 [2024-12-10 05:03:35.019777] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:43.945 [2024-12-10 05:03:35.026990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016edece0 00:26:43.945 [2024-12-10 05:03:35.028236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.945 [2024-12-10 05:03:35.028255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.945 [2024-12-10 05:03:35.035250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef7da8 00:26:43.945 [2024-12-10 05:03:35.035933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.945 [2024-12-10 05:03:35.035952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:43.945 [2024-12-10 05:03:35.044468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef7970 00:26:43.945 [2024-12-10 05:03:35.045254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.945 [2024-12-10 05:03:35.045274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:43.945 [2024-12-10 05:03:35.053515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eea680 00:26:43.945 [2024-12-10 05:03:35.054351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6026 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:43.945 [2024-12-10 05:03:35.054371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:43.945 [2024-12-10 05:03:35.061940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eec408 00:26:43.945 [2024-12-10 05:03:35.062704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.945 [2024-12-10 05:03:35.062724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:43.945 [2024-12-10 05:03:35.071877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efac10 00:26:43.945 [2024-12-10 05:03:35.072829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.945 [2024-12-10 05:03:35.072848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.081708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efd208 00:26:44.204 [2024-12-10 05:03:35.082874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.082896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.090290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eddc00 00:26:44.204 [2024-12-10 05:03:35.091108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 
lba:17477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.091127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.099184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef0ff8 00:26:44.204 [2024-12-10 05:03:35.099991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.100010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.108261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef7100 00:26:44.204 [2024-12-10 05:03:35.109047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.109067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.117334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee9e10 00:26:44.204 [2024-12-10 05:03:35.118135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.118154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.126267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee8088 00:26:44.204 [2024-12-10 05:03:35.127040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.127058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.135461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee23b8 00:26:44.204 [2024-12-10 05:03:35.136378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.136397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.143749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee1f80 00:26:44.204 [2024-12-10 05:03:35.144532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.144551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.152397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef5be8 00:26:44.204 [2024-12-10 05:03:35.153149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.153172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.162412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef7100 
00:26:44.204 [2024-12-10 05:03:35.163374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.163393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.171310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef2948 00:26:44.204 [2024-12-10 05:03:35.172220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.172239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.180262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef81e0 00:26:44.204 [2024-12-10 05:03:35.181169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.181188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.189469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee5ec8 00:26:44.204 [2024-12-10 05:03:35.190467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.190487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.198812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10b2390) with pdu=0x200016ee27f0 00:26:44.204 [2024-12-10 05:03:35.199927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.199947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.206110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee0630 00:26:44.204 [2024-12-10 05:03:35.206782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.206801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.215028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee5220 00:26:44.204 [2024-12-10 05:03:35.215690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.215709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.223967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee01f8 00:26:44.204 [2024-12-10 05:03:35.224635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.224653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.234060] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee5a90 00:26:44.204 [2024-12-10 05:03:35.235181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.235199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.242380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee95a0 00:26:44.204 [2024-12-10 05:03:35.243202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.243221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.251258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef6020 00:26:44.204 [2024-12-10 05:03:35.252014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.252034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.260241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eff3c8 00:26:44.204 [2024-12-10 05:03:35.261021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.261040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 
dnr:0 00:26:44.204 [2024-12-10 05:03:35.270357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef2948 00:26:44.204 [2024-12-10 05:03:35.271584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.271603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.278663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef6890 00:26:44.204 [2024-12-10 05:03:35.279569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.204 [2024-12-10 05:03:35.279588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:44.204 [2024-12-10 05:03:35.287451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eed0b0 00:26:44.205 [2024-12-10 05:03:35.288336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.205 [2024-12-10 05:03:35.288355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:44.205 [2024-12-10 05:03:35.296343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef46d0 00:26:44.205 [2024-12-10 05:03:35.297232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.205 [2024-12-10 05:03:35.297251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:44.205 [2024-12-10 05:03:35.305298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef1ca0
00:26:44.205 [2024-12-10 05:03:35.306206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.205 [2024-12-10 05:03:35.306226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:44.205 [2024-12-10 05:03:35.314208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eeee38
00:26:44.205 [2024-12-10 05:03:35.316150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.205 [2024-12-10 05:03:35.316176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:44.205 28267.00 IOPS, 110.42 MiB/s [2024-12-10T04:03:35.342Z] [2024-12-10 05:03:35.323138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee95a0
00:26:44.205 [2024-12-10 05:03:35.324015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.205 [2024-12-10 05:03:35.324035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:44.205 [2024-12-10 05:03:35.332141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef6020
00:26:44.205 [2024-12-10 05:03:35.333080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.205 [2024-12-10 05:03:35.333100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:44.509 [2024-12-10 05:03:35.341293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eff3c8
00:26:44.509 [2024-12-10 05:03:35.342233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.509 [2024-12-10 05:03:35.342254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:44.509 [2024-12-10 05:03:35.350491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee88f8
00:26:44.509 [2024-12-10 05:03:35.351410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.509 [2024-12-10 05:03:35.351431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:44.509 [2024-12-10 05:03:35.359882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee1b48
00:26:44.509 [2024-12-10 05:03:35.360597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.509 [2024-12-10 05:03:35.360617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:44.509 [2024-12-10 05:03:35.368327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eeea00
00:26:44.509 [2024-12-10 05:03:35.369017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.509 [2024-12-10 05:03:35.369037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:44.509 [2024-12-10 05:03:35.379023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef4298
00:26:44.509 [2024-12-10 05:03:35.380487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.509 [2024-12-10 05:03:35.380508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:44.509 [2024-12-10 05:03:35.385332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eedd58
00:26:44.509 [2024-12-10 05:03:35.385968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.509 [2024-12-10 05:03:35.385988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:26:44.509 [2024-12-10 05:03:35.395537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee4140
00:26:44.509 [2024-12-10 05:03:35.396240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.509 [2024-12-10 05:03:35.396260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:44.509 [2024-12-10 05:03:35.403810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eee5c8
00:26:44.509 [2024-12-10 05:03:35.404565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.509 [2024-12-10 05:03:35.404584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:44.509 [2024-12-10 05:03:35.414245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efd640
00:26:44.509 [2024-12-10 05:03:35.415612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.509 [2024-12-10 05:03:35.415632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:44.509 [2024-12-10 05:03:35.420799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee3d08
00:26:44.509 [2024-12-10 05:03:35.421455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.509 [2024-12-10 05:03:35.421475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:44.509 [2024-12-10 05:03:35.429985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efa7d8
00:26:44.509 [2024-12-10 05:03:35.430642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.509 [2024-12-10 05:03:35.430662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:44.509 [2024-12-10 05:03:35.439193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efda78
00:26:44.509 [2024-12-10 05:03:35.439848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.509 [2024-12-10 05:03:35.439868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:44.509 [2024-12-10 05:03:35.448878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee88f8
00:26:44.509 [2024-12-10 05:03:35.449771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.510 [2024-12-10 05:03:35.449791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:44.510 [2024-12-10 05:03:35.457948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efef90
00:26:44.510 [2024-12-10 05:03:35.458887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.510 [2024-12-10 05:03:35.458906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:44.510 [2024-12-10 05:03:35.466461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee3060
00:26:44.510 [2024-12-10 05:03:35.467249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.510 [2024-12-10 05:03:35.467268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:26:44.510 [2024-12-10 05:03:35.476487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee12d8
00:26:44.510 [2024-12-10 05:03:35.477618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.510 [2024-12-10 05:03:35.477638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:26:44.510 [2024-12-10 05:03:35.484875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef3e60
00:26:44.510 [2024-12-10 05:03:35.485759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.510 [2024-12-10 05:03:35.485779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:44.510 [2024-12-10 05:03:35.493729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee4578
00:26:44.510 [2024-12-10 05:03:35.494511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.510 [2024-12-10 05:03:35.494531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:26:44.510 [2024-12-10 05:03:35.502188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee5a90
00:26:44.510 [2024-12-10 05:03:35.502960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.510 [2024-12-10 05:03:35.502979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:26:44.510 [2024-12-10 05:03:35.511242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef96f8
00:26:44.510 [2024-12-10 05:03:35.511924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.510 [2024-12-10 05:03:35.511944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:26:44.510 [2024-12-10 05:03:35.521757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef2510
00:26:44.510 [2024-12-10 05:03:35.522917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.510 [2024-12-10 05:03:35.522937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:44.510 [2024-12-10 05:03:35.531293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef9f68
00:26:44.510 [2024-12-10 05:03:35.532663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.510 [2024-12-10 05:03:35.532682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:44.510 [2024-12-10 05:03:35.540625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee73e0
00:26:44.510 [2024-12-10 05:03:35.542103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.510 [2024-12-10 05:03:35.542122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:44.510 [2024-12-10 05:03:35.546927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efa7d8
00:26:44.510 [2024-12-10 05:03:35.547583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.510 [2024-12-10 05:03:35.547618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:26:44.510 [2024-12-10 05:03:35.556225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee5658
00:26:44.510 [2024-12-10 05:03:35.556677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.510 [2024-12-10 05:03:35.556697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:44.510 [2024-12-10 05:03:35.565602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016edf118
00:26:44.510 [2024-12-10 05:03:35.566163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.510 [2024-12-10 05:03:35.566187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:44.510 [2024-12-10 05:03:35.573654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee2c28
00:26:44.510 [2024-12-10 05:03:35.574448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.510 [2024-12-10 05:03:35.574467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:44.510 [2024-12-10 05:03:35.583017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef8618
00:26:44.510 [2024-12-10 05:03:35.583949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.510 [2024-12-10 05:03:35.583969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:44.510 [2024-12-10 05:03:35.592976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee7c50
00:26:44.510 [2024-12-10 05:03:35.594170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.510 [2024-12-10 05:03:35.594193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:44.850 [2024-12-10 05:03:35.602864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efc128
00:26:44.850 [2024-12-10 05:03:35.603451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.850 [2024-12-10 05:03:35.603473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:44.850 [2024-12-10 05:03:35.613831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef6cc8
00:26:44.850 [2024-12-10 05:03:35.615390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.850 [2024-12-10 05:03:35.615410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:44.850 [2024-12-10 05:03:35.620693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef0350
00:26:44.850 [2024-12-10 05:03:35.621419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.850 [2024-12-10 05:03:35.621439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:44.850 [2024-12-10 05:03:35.632050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee6b70
00:26:44.850 [2024-12-10 05:03:35.633344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.850 [2024-12-10 05:03:35.633364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:26:44.850 [2024-12-10 05:03:35.641193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef1868
00:26:44.850 [2024-12-10 05:03:35.642379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.850 [2024-12-10 05:03:35.642399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:44.850 [2024-12-10 05:03:35.649362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee7818
00:26:44.850 [2024-12-10 05:03:35.650509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.850 [2024-12-10 05:03:35.650531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:44.850 [2024-12-10 05:03:35.658645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee4140
00:26:44.850 [2024-12-10 05:03:35.659645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.850 [2024-12-10 05:03:35.659665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:26:44.850 [2024-12-10 05:03:35.667902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee1f80
00:26:44.850 [2024-12-10 05:03:35.668530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.850 [2024-12-10 05:03:35.668550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:44.850 [2024-12-10 05:03:35.676872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efd208
00:26:44.850 [2024-12-10 05:03:35.677822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.850 [2024-12-10 05:03:35.677842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:44.850 [2024-12-10 05:03:35.686243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee6fa8
00:26:44.850 [2024-12-10 05:03:35.687163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.850 [2024-12-10 05:03:35.687187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:44.850 [2024-12-10 05:03:35.697285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016edf118
00:26:44.850 [2024-12-10 05:03:35.698643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.850 [2024-12-10 05:03:35.698663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:44.850 [2024-12-10 05:03:35.706759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef2948
00:26:44.850 [2024-12-10 05:03:35.708293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.850 [2024-12-10 05:03:35.708313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:44.850 [2024-12-10 05:03:35.713245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef6890
00:26:44.850 [2024-12-10 05:03:35.714022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.850 [2024-12-10 05:03:35.714042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:44.850 [2024-12-10 05:03:35.724344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef6890
00:26:44.850 [2024-12-10 05:03:35.725616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.850 [2024-12-10 05:03:35.725636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:44.850 [2024-12-10 05:03:35.732273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efbcf0
00:26:44.850 [2024-12-10 05:03:35.733060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.850 [2024-12-10 05:03:35.733079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:44.850 [2024-12-10 05:03:35.741407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efeb58
00:26:44.850 [2024-12-10 05:03:35.742185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.850 [2024-12-10 05:03:35.742206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.750667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ede8a8
00:26:44.851 [2024-12-10 05:03:35.751457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.751477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.759029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efef90
00:26:44.851 [2024-12-10 05:03:35.759851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.759870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.768399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee7c50
00:26:44.851 [2024-12-10 05:03:35.769325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.769345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.777749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efbcf0
00:26:44.851 [2024-12-10 05:03:35.778734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.778754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.787088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efdeb0
00:26:44.851 [2024-12-10 05:03:35.788190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.788213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.794240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee4578
00:26:44.851 [2024-12-10 05:03:35.794936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.794956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.805150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ede470
00:26:44.851 [2024-12-10 05:03:35.806243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.806263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.815058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee1b48
00:26:44.851 [2024-12-10 05:03:35.816505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.816525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.823752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef1868
00:26:44.851 [2024-12-10 05:03:35.825065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.825085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.831193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eeaef0
00:26:44.851 [2024-12-10 05:03:35.831697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.831717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.840053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee01f8
00:26:44.851 [2024-12-10 05:03:35.840837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.840857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.849737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee6300
00:26:44.851 [2024-12-10 05:03:35.850861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.850881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.860990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eedd58
00:26:44.851 [2024-12-10 05:03:35.862594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.862613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.867708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef2510
00:26:44.851 [2024-12-10 05:03:35.868613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.868632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.878905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee0ea0
00:26:44.851 [2024-12-10 05:03:35.880267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.880287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.886812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee6300
00:26:44.851 [2024-12-10 05:03:35.887521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.887540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.896073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef0788
00:26:44.851 [2024-12-10 05:03:35.897094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.897114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.905121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee0ea0
00:26:44.851 [2024-12-10 05:03:35.905819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.905840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.914085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee27f0
00:26:44.851 [2024-12-10 05:03:35.915064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.915085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.923225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef8618
00:26:44.851 [2024-12-10 05:03:35.924107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.924127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.933632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef6cc8
00:26:44.851 [2024-12-10 05:03:35.934929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.934948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.943277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee23b8
00:26:44.851 [2024-12-10 05:03:35.944769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.944788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:44.851 [2024-12-10 05:03:35.949755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee3060
00:26:44.851 [2024-12-10 05:03:35.950439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.851 [2024-12-10 05:03:35.950459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:45.111 [2024-12-10 05:03:35.960154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee6300
00:26:45.111 [2024-12-10 05:03:35.961010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.111 [2024-12-10 05:03:35.961030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:45.111 [2024-12-10 05:03:35.970724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee8d30
00:26:45.112 [2024-12-10 05:03:35.971805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.112 [2024-12-10 05:03:35.971826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:26:45.112 [2024-12-10 05:03:35.980968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efcdd0
00:26:45.112 [2024-12-10 05:03:35.982184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.112 [2024-12-10 05:03:35.982204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:26:45.112 [2024-12-10 05:03:35.990922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efb480
00:26:45.112 [2024-12-10 05:03:35.992113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.112 [2024-12-10 05:03:35.992133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:45.112 [2024-12-10 05:03:35.999212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eee5c8
00:26:45.112 [2024-12-10 05:03:36.000510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.112 [2024-12-10 05:03:36.000530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:45.112 [2024-12-10 05:03:36.007051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee8d30
00:26:45.112 [2024-12-10 05:03:36.007717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.112 [2024-12-10 05:03:36.007737] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.018276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee6b70 00:26:45.112 [2024-12-10 05:03:36.019518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.019537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.026551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef4b08 00:26:45.112 [2024-12-10 05:03:36.027344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.027367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.035613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eeaef0 00:26:45.112 [2024-12-10 05:03:36.036649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.036668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.043983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016edf988 00:26:45.112 [2024-12-10 05:03:36.045239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.045258] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.053844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee6300 00:26:45.112 [2024-12-10 05:03:36.054977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.054997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.062503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efc128 00:26:45.112 [2024-12-10 05:03:36.063569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.063589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.071628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efda78 00:26:45.112 [2024-12-10 05:03:36.072729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.072749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.079882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eebb98 00:26:45.112 [2024-12-10 05:03:36.080540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:45.112 [2024-12-10 05:03:36.080559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.088951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016edf988 00:26:45.112 [2024-12-10 05:03:36.089853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.089873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.097649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee6738 00:26:45.112 [2024-12-10 05:03:36.098486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.098506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.107012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ede038 00:26:45.112 [2024-12-10 05:03:36.107901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.107924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.116502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee6300 00:26:45.112 [2024-12-10 05:03:36.117497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15174 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.117516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.125963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efd208 00:26:45.112 [2024-12-10 05:03:36.127122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.127142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.135092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef7da8 00:26:45.112 [2024-12-10 05:03:36.136106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.136126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.142542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef2948 00:26:45.112 [2024-12-10 05:03:36.143278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.143297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.153218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef92c0 00:26:45.112 [2024-12-10 05:03:36.154437] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.154457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.161656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eeb328 00:26:45.112 [2024-12-10 05:03:36.162584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.162603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.170675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eeee38 00:26:45.112 [2024-12-10 05:03:36.171557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.171577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.180828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eee190 00:26:45.112 [2024-12-10 05:03:36.182160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.182184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.190319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee3060 00:26:45.112 [2024-12-10 05:03:36.191808] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.191827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.196801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eeb760 00:26:45.112 [2024-12-10 05:03:36.197458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.197477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.207024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee73e0 00:26:45.112 [2024-12-10 05:03:36.208116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.112 [2024-12-10 05:03:36.208135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:45.112 [2024-12-10 05:03:36.216104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee1b48 00:26:45.112 [2024-12-10 05:03:36.217198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.113 [2024-12-10 05:03:36.217217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:45.113 [2024-12-10 05:03:36.223997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef7100 
00:26:45.113 [2024-12-10 05:03:36.224544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.113 [2024-12-10 05:03:36.224564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:45.113 [2024-12-10 05:03:36.234115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee95a0 00:26:45.113 [2024-12-10 05:03:36.235197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.113 [2024-12-10 05:03:36.235218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:45.113 [2024-12-10 05:03:36.243155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016eeea00 00:26:45.372 [2024-12-10 05:03:36.244191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.372 [2024-12-10 05:03:36.244211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:45.372 [2024-12-10 05:03:36.252263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee73e0 00:26:45.372 [2024-12-10 05:03:36.253379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.372 [2024-12-10 05:03:36.253398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:45.372 [2024-12-10 05:03:36.260596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x10b2390) with pdu=0x200016ee9e10 00:26:45.372 [2024-12-10 05:03:36.261259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.372 [2024-12-10 05:03:36.261279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:45.372 [2024-12-10 05:03:36.269704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef8e88 00:26:45.372 [2024-12-10 05:03:36.270251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.372 [2024-12-10 05:03:36.270271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:45.372 [2024-12-10 05:03:36.278680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef8e88 00:26:45.372 [2024-12-10 05:03:36.279459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.372 [2024-12-10 05:03:36.279478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:45.372 [2024-12-10 05:03:36.288809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef8e88 00:26:45.372 [2024-12-10 05:03:36.290161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.372 [2024-12-10 05:03:36.290184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:45.372 [2024-12-10 05:03:36.295266] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ee0630 00:26:45.372 [2024-12-10 05:03:36.295891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.372 [2024-12-10 05:03:36.295911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:45.372 [2024-12-10 05:03:36.304566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016ef6890 00:26:45.372 [2024-12-10 05:03:36.305312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.372 [2024-12-10 05:03:36.305331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:45.372 [2024-12-10 05:03:36.315562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2390) with pdu=0x200016efc128 00:26:45.372 [2024-12-10 05:03:36.316793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.372 [2024-12-10 05:03:36.316813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:45.372 28181.00 IOPS, 110.08 MiB/s 00:26:45.372 Latency(us) 00:26:45.372 [2024-12-10T04:03:36.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.372 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:45.372 nvme0n1 : 2.01 28193.47 110.13 0.00 0.00 4534.23 1778.83 12420.63 00:26:45.372 [2024-12-10T04:03:36.509Z] =================================================================================================================== 
00:26:45.372 [2024-12-10T04:03:36.509Z] Total : 28193.47 110.13 0.00 0.00 4534.23 1778.83 12420.63 00:26:45.372 { 00:26:45.372 "results": [ 00:26:45.372 { 00:26:45.372 "job": "nvme0n1", 00:26:45.372 "core_mask": "0x2", 00:26:45.372 "workload": "randwrite", 00:26:45.372 "status": "finished", 00:26:45.372 "queue_depth": 128, 00:26:45.372 "io_size": 4096, 00:26:45.372 "runtime": 2.006493, 00:26:45.372 "iops": 28193.469899969747, 00:26:45.372 "mibps": 110.13074179675682, 00:26:45.372 "io_failed": 0, 00:26:45.372 "io_timeout": 0, 00:26:45.372 "avg_latency_us": 4534.2272117982775, 00:26:45.372 "min_latency_us": 1778.8342857142857, 00:26:45.372 "max_latency_us": 12420.63238095238 00:26:45.372 } 00:26:45.372 ], 00:26:45.372 "core_count": 1 00:26:45.372 } 00:26:45.372 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:45.372 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:45.372 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:45.372 | .driver_specific 00:26:45.372 | .nvme_error 00:26:45.372 | .status_code 00:26:45.372 | .command_transient_transport_error' 00:26:45.372 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:45.632 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 221 > 0 )) 00:26:45.632 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 773154 00:26:45.632 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 773154 ']' 00:26:45.632 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 773154 00:26:45.632 05:03:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:45.632 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:45.632 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773154 00:26:45.632 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:45.632 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:45.632 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773154' 00:26:45.632 killing process with pid 773154 00:26:45.632 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 773154 00:26:45.632 Received shutdown signal, test time was about 2.000000 seconds 00:26:45.632 00:26:45.632 Latency(us) 00:26:45.632 [2024-12-10T04:03:36.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.632 [2024-12-10T04:03:36.769Z] =================================================================================================================== 00:26:45.632 [2024-12-10T04:03:36.769Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:45.632 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 773154 00:26:45.632 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:45.632 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:45.632 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:45.632 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 
00:26:45.632 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:45.632 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=773626 00:26:45.891 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 773626 /var/tmp/bperf.sock 00:26:45.891 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:45.891 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 773626 ']' 00:26:45.891 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:45.891 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:45.891 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:45.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:45.891 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:45.891 05:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.891 [2024-12-10 05:03:36.807399] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:26:45.891 [2024-12-10 05:03:36.807445] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773626 ] 00:26:45.891 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:45.891 Zero copy mechanism will not be used. 00:26:45.891 [2024-12-10 05:03:36.879715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.891 [2024-12-10 05:03:36.915827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.891 05:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:45.891 05:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:45.891 05:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:45.891 05:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:46.150 05:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:46.150 05:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.150 05:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:46.150 05:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.150 05:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:46.150 05:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:46.409 nvme0n1 00:26:46.409 05:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:46.409 05:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.409 05:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:46.409 05:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.409 05:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:46.409 05:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:46.669 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:46.669 Zero copy mechanism will not be used. 00:26:46.669 Running I/O for 2 seconds... 
00:26:46.669 [2024-12-10 05:03:37.586735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8
00:26:46.669 [2024-12-10 05:03:37.586814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:46.669 [2024-12-10 05:03:37.586846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-message pattern repeats for roughly one hundred further WRITE commands on the same connection: tcp.c:2241 data_crc32_calc_done Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8, a nvme_qpair.c:243 WRITE command print (sqid:1 cid:0 nsid:1, len:32, lba varying), and a nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0002/0022/0042/0062, from 05:03:37.593 through 05:03:38.017 ...]
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.932 [2024-12-10 05:03:38.021996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:46.932 [2024-12-10 05:03:38.022260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.932 [2024-12-10 05:03:38.022280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.932 [2024-12-10 05:03:38.026266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:46.932 [2024-12-10 05:03:38.026528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.932 [2024-12-10 05:03:38.026548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.932 [2024-12-10 05:03:38.030673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:46.932 [2024-12-10 05:03:38.030942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.932 [2024-12-10 05:03:38.030962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.932 [2024-12-10 05:03:38.035072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:46.932 [2024-12-10 05:03:38.035336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.932 [2024-12-10 05:03:38.035355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.932 [2024-12-10 05:03:38.039442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:46.933 [2024-12-10 05:03:38.039709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.933 [2024-12-10 05:03:38.039729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.933 [2024-12-10 05:03:38.043675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:46.933 [2024-12-10 05:03:38.043941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.933 [2024-12-10 05:03:38.043961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.933 [2024-12-10 05:03:38.047891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:46.933 [2024-12-10 05:03:38.048154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.933 [2024-12-10 05:03:38.048179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.933 [2024-12-10 05:03:38.052253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:46.933 [2024-12-10 05:03:38.052518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:46.933 [2024-12-10 05:03:38.052538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.933 [2024-12-10 05:03:38.056868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:46.933 [2024-12-10 05:03:38.057124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.933 [2024-12-10 05:03:38.057144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.933 [2024-12-10 05:03:38.062219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.062488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.062508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.066706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.066974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.066995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.071116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.071392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.071412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.075524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.075793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.075813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.079778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.080046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.080066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.084100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.084377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.084397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.088455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.088724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.088744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.092685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.092953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.092972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.097260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.097539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.097558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.101995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.102279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.102304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.106448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 
00:26:47.193 [2024-12-10 05:03:38.106722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.106742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.110902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.111185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.111206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.115404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.115678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.115698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.119780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.120039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.120059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.124136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.124406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.124426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.128704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.128979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.128999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.134232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.134568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.134588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.140912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.141158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.141184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.193 [2024-12-10 05:03:38.146680] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.193 [2024-12-10 05:03:38.146957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.193 [2024-12-10 05:03:38.146977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.151818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.152084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.152104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.156889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.157146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.157171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.161656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.161931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.161952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:47.194 [2024-12-10 05:03:38.166750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.167006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.167027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.171458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.171716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.171736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.176716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.176958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.176978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.182256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.182523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.182542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.188081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.188370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.188390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.194218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.194529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.194549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.201152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.201480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.201500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.207178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.207432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.207453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.213120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.213428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.213449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.219546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.219830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.219850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.225510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.225806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.225827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.231675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.231919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:47.194 [2024-12-10 05:03:38.231939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.237837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.237937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.237955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.244498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.244749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.244773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.250619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.250889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.250909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.256569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.256839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.256859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.263125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.263362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.263383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.269244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.269389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.269407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.275615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.275923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.275943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.282156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.282488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.282508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.288438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.288771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.288791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.294910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.295144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.295165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.300699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.300914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.300934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.305842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 
00:26:47.194 [2024-12-10 05:03:38.305959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.194 [2024-12-10 05:03:38.305978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.194 [2024-12-10 05:03:38.312183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.194 [2024-12-10 05:03:38.312393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.195 [2024-12-10 05:03:38.312411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.195 [2024-12-10 05:03:38.317407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.195 [2024-12-10 05:03:38.317635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.195 [2024-12-10 05:03:38.317655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.195 [2024-12-10 05:03:38.321820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.195 [2024-12-10 05:03:38.322025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.195 [2024-12-10 05:03:38.322045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.455 [2024-12-10 05:03:38.326033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.455 [2024-12-10 05:03:38.326252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.455 [2024-12-10 05:03:38.326271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.455 [2024-12-10 05:03:38.331084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.455 [2024-12-10 05:03:38.331433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.455 [2024-12-10 05:03:38.331453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.455 [2024-12-10 05:03:38.336523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.455 [2024-12-10 05:03:38.336668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.455 [2024-12-10 05:03:38.336688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.455 [2024-12-10 05:03:38.341673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.455 [2024-12-10 05:03:38.342007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.455 [2024-12-10 05:03:38.342027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.455 [2024-12-10 05:03:38.347408] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.455 [2024-12-10 05:03:38.347636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.455 [2024-12-10 05:03:38.347656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.455 [2024-12-10 05:03:38.352495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.455 [2024-12-10 05:03:38.352707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.455 [2024-12-10 05:03:38.352728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.455 [2024-12-10 05:03:38.357690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.455 [2024-12-10 05:03:38.357926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.455 [2024-12-10 05:03:38.357947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.455 [2024-12-10 05:03:38.362816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.455 [2024-12-10 05:03:38.363131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.455 [2024-12-10 05:03:38.363151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:47.455 [2024-12-10 05:03:38.368086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.455 [2024-12-10 05:03:38.368324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.455 [2024-12-10 05:03:38.368344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.455 [2024-12-10 05:03:38.373432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.455 [2024-12-10 05:03:38.373718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.455 [2024-12-10 05:03:38.373738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.455 [2024-12-10 05:03:38.378515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.455 [2024-12-10 05:03:38.378713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.455 [2024-12-10 05:03:38.378731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.455 [2024-12-10 05:03:38.383751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.455 [2024-12-10 05:03:38.384002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.455 [2024-12-10 05:03:38.384022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.455 [2024-12-10 05:03:38.389227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.455 [2024-12-10 05:03:38.389490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.389513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.394679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.394926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.394946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.399843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.400031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.400049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.405179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.405389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.405409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.410296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.410585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.410605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.415776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.415959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.415978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.420932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.421209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.421230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.426215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.426505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:47.456 [2024-12-10 05:03:38.426525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.431401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.431596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.431615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.436889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.437109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.437129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.442092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.442291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.442309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.447338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.447623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6208 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.447643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.452521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.452684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.452702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.457739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.458045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.458066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.463139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.463441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.463461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.468871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.469051] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.469069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.474075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.474257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.474276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.480297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.480484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.480502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.485363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.485550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.485568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.489372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.489559] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.489577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.493653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.493818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.493837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.497840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.498032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.498052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.501917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.502101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.502119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.505947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with 
pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.506135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.506154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.509932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.510143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.510176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.513944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.514178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.514199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.517906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.518097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.518118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.522519] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.456 [2024-12-10 05:03:38.522721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.456 [2024-12-10 05:03:38.522739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.456 [2024-12-10 05:03:38.527074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.457 [2024-12-10 05:03:38.527258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.457 [2024-12-10 05:03:38.527276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.457 [2024-12-10 05:03:38.530754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.457 [2024-12-10 05:03:38.530933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.457 [2024-12-10 05:03:38.530950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.457 [2024-12-10 05:03:38.534498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.457 [2024-12-10 05:03:38.534672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.457 [2024-12-10 05:03:38.534692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.457 [2024-12-10 
05:03:38.538186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.457 [2024-12-10 05:03:38.538370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.457 [2024-12-10 05:03:38.538388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.457 [2024-12-10 05:03:38.541889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.457 [2024-12-10 05:03:38.542074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.457 [2024-12-10 05:03:38.542093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.457 [2024-12-10 05:03:38.545561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.457 [2024-12-10 05:03:38.545729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.457 [2024-12-10 05:03:38.545748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.457 [2024-12-10 05:03:38.549236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.457 [2024-12-10 05:03:38.549425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.457 [2024-12-10 05:03:38.549445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:47.457 [2024-12-10 05:03:38.552954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.457 [2024-12-10 05:03:38.553138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.457 [2024-12-10 05:03:38.553156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.457 [2024-12-10 05:03:38.556625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.457 [2024-12-10 05:03:38.556808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.457 [2024-12-10 05:03:38.556826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.457 [2024-12-10 05:03:38.560298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.457 [2024-12-10 05:03:38.560468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.457 [2024-12-10 05:03:38.560487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.457 [2024-12-10 05:03:38.563974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.457 [2024-12-10 05:03:38.564171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.457 [2024-12-10 05:03:38.564189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.457 [2024-12-10 05:03:38.567673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.457 [2024-12-10 05:03:38.567845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.457 [2024-12-10 05:03:38.567864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.457 [2024-12-10 05:03:38.571339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.457 [2024-12-10 05:03:38.571514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.457 [2024-12-10 05:03:38.571534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.457 [2024-12-10 05:03:38.575003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.457 [2024-12-10 05:03:38.575191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.457 [2024-12-10 05:03:38.575209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.457 [2024-12-10 05:03:38.578690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.457 [2024-12-10 05:03:38.578874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.457 [2024-12-10 05:03:38.578892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.457 [2024-12-10 05:03:38.582384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.457 [2024-12-10 05:03:38.582561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.457 [2024-12-10 05:03:38.582581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.457 [2024-12-10 05:03:38.586196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.586368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.586387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.718 5960.00 IOPS, 745.00 MiB/s [2024-12-10T04:03:38.855Z] [2024-12-10 05:03:38.590876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.591057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.591076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.594785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.594950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.594968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.599086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.599273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.599294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.603464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.603620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.603642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.607923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.608059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.608079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.612768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.613155] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.613181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.617522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.617688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.617708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.621658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.621805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.621827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.625613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.625760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.625778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.629342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.629502] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.629520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.633176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.633320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.633338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.637109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.637273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.637291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.640984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.641143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.641161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.644917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with 
pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.645093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.645111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.648701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.648847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.648865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.652542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.652682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.652700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.656446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.656598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.656616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.660395] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.660540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.660559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.664329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.664491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.664509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.668183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.668350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.668368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.718 [2024-12-10 05:03:38.671993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.718 [2024-12-10 05:03:38.672144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.718 [2024-12-10 05:03:38.672162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 
05:03:38.675791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.675955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.675973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.679708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.679885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.679903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.683520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.683683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.683702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.687383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.687528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.687546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.691283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.691432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.691450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.695401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.695558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.695577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.699145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.699313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.699331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.703147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.703304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.703322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.708138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.708296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.708314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.712156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.712329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.712350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.716041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.716206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.716224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.719939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.720088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.720105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.723727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.723884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.723906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.727733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.727892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.727910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.731521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.731683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.731702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.735151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.735325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:47.719 [2024-12-10 05:03:38.735344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.738791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.738965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.738983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.742415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.742575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.742593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.746023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.746217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.746235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.749662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.749808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.749826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.753509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.753669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.753689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.757681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.757830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.757849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.762330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.762482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.762500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.766478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.766621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.766640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.771104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.771270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.771289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.775438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.775573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.775592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.779893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.719 [2024-12-10 05:03:38.780017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.780035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.719 [2024-12-10 05:03:38.784448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 
00:26:47.719 [2024-12-10 05:03:38.784580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.719 [2024-12-10 05:03:38.784598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.720 [2024-12-10 05:03:38.789147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.720 [2024-12-10 05:03:38.789305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.720 [2024-12-10 05:03:38.789323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.720 [2024-12-10 05:03:38.793847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.720 [2024-12-10 05:03:38.794016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.720 [2024-12-10 05:03:38.794034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.720 [2024-12-10 05:03:38.798257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.720 [2024-12-10 05:03:38.798402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.720 [2024-12-10 05:03:38.798421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.720 [2024-12-10 05:03:38.802157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.720 [2024-12-10 05:03:38.802300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.720 [2024-12-10 05:03:38.802321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.720 [2024-12-10 05:03:38.806020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.720 [2024-12-10 05:03:38.806193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.720 [2024-12-10 05:03:38.806211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.720 [2024-12-10 05:03:38.809742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.720 [2024-12-10 05:03:38.809898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.720 [2024-12-10 05:03:38.809916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.720 [2024-12-10 05:03:38.813545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.720 [2024-12-10 05:03:38.813690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.720 [2024-12-10 05:03:38.813708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.720 [2024-12-10 05:03:38.818221] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.720 [2024-12-10 05:03:38.818468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.720 [2024-12-10 05:03:38.818486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.720 [2024-12-10 05:03:38.822197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.720 [2024-12-10 05:03:38.822359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.720 [2024-12-10 05:03:38.822379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.720 [2024-12-10 05:03:38.825814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.720 [2024-12-10 05:03:38.825976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.720 [2024-12-10 05:03:38.825994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.720 [2024-12-10 05:03:38.829445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.720 [2024-12-10 05:03:38.829599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.720 [2024-12-10 05:03:38.829621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:47.720 [2024-12-10 05:03:38.833038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.720 [2024-12-10 05:03:38.833219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.720 [2024-12-10 05:03:38.833237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.720 [2024-12-10 05:03:38.836623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.720 [2024-12-10 05:03:38.836774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.720 [2024-12-10 05:03:38.836792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.720 [2024-12-10 05:03:38.840414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.720 [2024-12-10 05:03:38.840558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.720 [2024-12-10 05:03:38.840575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.720 [2024-12-10 05:03:38.844969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.720 [2024-12-10 05:03:38.845152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.720 [2024-12-10 05:03:38.845176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.980 [2024-12-10 05:03:38.850566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.980 [2024-12-10 05:03:38.850756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.980 [2024-12-10 05:03:38.850775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.980 [2024-12-10 05:03:38.854875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.980 [2024-12-10 05:03:38.855042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.980 [2024-12-10 05:03:38.855060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.980 [2024-12-10 05:03:38.858842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.980 [2024-12-10 05:03:38.858991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.980 [2024-12-10 05:03:38.859009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.980 [2024-12-10 05:03:38.862846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.980 [2024-12-10 05:03:38.863020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.980 [2024-12-10 05:03:38.863039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.980 [2024-12-10 05:03:38.866868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.980 [2024-12-10 05:03:38.867029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.980 [2024-12-10 05:03:38.867047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.980 [2024-12-10 05:03:38.870653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.980 [2024-12-10 05:03:38.870809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.980 [2024-12-10 05:03:38.870827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.980 [2024-12-10 05:03:38.874570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.980 [2024-12-10 05:03:38.874713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.980 [2024-12-10 05:03:38.874731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.980 [2024-12-10 05:03:38.878963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.980 [2024-12-10 05:03:38.879095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:47.980 [2024-12-10 05:03:38.879113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.980 [2024-12-10 05:03:38.883564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.980 [2024-12-10 05:03:38.883694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.980 [2024-12-10 05:03:38.883711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.980 [2024-12-10 05:03:38.887473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.980 [2024-12-10 05:03:38.887585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.980 [2024-12-10 05:03:38.887603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.980 [2024-12-10 05:03:38.891392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.980 [2024-12-10 05:03:38.891567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.980 [2024-12-10 05:03:38.891585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.980 [2024-12-10 05:03:38.895266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.980 [2024-12-10 05:03:38.895449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.980 [2024-12-10 05:03:38.895467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.980 [2024-12-10 05:03:38.899119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.980 [2024-12-10 05:03:38.899288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.980 [2024-12-10 05:03:38.899307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.980 [2024-12-10 05:03:38.903789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.903894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.903913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.908864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.909025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.909042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.913209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.913349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.913368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.916983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.917136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.917154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.920781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.920936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.920954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.924535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.924697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.924715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.928445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 
00:26:47.981 [2024-12-10 05:03:38.928580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.928598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.932737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.932865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.932883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.936985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.937131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.937153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.940964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.941118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.941136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.945100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.945235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.945254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.949519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.949677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.949694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.953466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.953615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.953633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.957331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.957484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.957501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.960930] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.961090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.961108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.964686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.964820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.964838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.968784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.968945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.968963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.972738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.972906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.972924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:47.981 [2024-12-10 05:03:38.976780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.976911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.976930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.982242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.982473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.982493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.987739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.987852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.987871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:38.993359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:38.993610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:38.993630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:39.000543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:39.000770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:39.000791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:39.006884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:39.007084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:39.007102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:39.013057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:39.013227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:39.013248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:39.019298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:39.019537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:39.019557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:39.025595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:39.025842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:39.025862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:39.031729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.981 [2024-12-10 05:03:39.031959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.981 [2024-12-10 05:03:39.031979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.981 [2024-12-10 05:03:39.037947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.982 [2024-12-10 05:03:39.038087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.982 [2024-12-10 05:03:39.038106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.982 [2024-12-10 05:03:39.044097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.982 [2024-12-10 05:03:39.044371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:47.982 [2024-12-10 05:03:39.044391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.982 [2024-12-10 05:03:39.050064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.982 [2024-12-10 05:03:39.050223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.982 [2024-12-10 05:03:39.050242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.982 [2024-12-10 05:03:39.056865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.982 [2024-12-10 05:03:39.057053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.982 [2024-12-10 05:03:39.057072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.982 [2024-12-10 05:03:39.062733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.982 [2024-12-10 05:03:39.062957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.982 [2024-12-10 05:03:39.062978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.982 [2024-12-10 05:03:39.069156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.982 [2024-12-10 05:03:39.069312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.982 [2024-12-10 05:03:39.069330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.982 [2024-12-10 05:03:39.075262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.982 [2024-12-10 05:03:39.075428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.982 [2024-12-10 05:03:39.075451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.982 [2024-12-10 05:03:39.082150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.982 [2024-12-10 05:03:39.082251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.982 [2024-12-10 05:03:39.082270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.982 [2024-12-10 05:03:39.087956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.982 [2024-12-10 05:03:39.088036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.982 [2024-12-10 05:03:39.088055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.982 [2024-12-10 05:03:39.093658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.982 [2024-12-10 05:03:39.093764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.982 [2024-12-10 05:03:39.093782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.982 [2024-12-10 05:03:39.097640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.982 [2024-12-10 05:03:39.097774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.982 [2024-12-10 05:03:39.097792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.982 [2024-12-10 05:03:39.101509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.982 [2024-12-10 05:03:39.101651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.982 [2024-12-10 05:03:39.101669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.982 [2024-12-10 05:03:39.105384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:47.982 [2024-12-10 05:03:39.105505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.982 [2024-12-10 05:03:39.105523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.982 [2024-12-10 05:03:39.109305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 
00:26:47.982 [2024-12-10 05:03:39.109447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.982 [2024-12-10 05:03:39.109465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.242 [2024-12-10 05:03:39.113338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.242 [2024-12-10 05:03:39.113476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.242 [2024-12-10 05:03:39.113495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.242 [2024-12-10 05:03:39.117249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.242 [2024-12-10 05:03:39.117366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.242 [2024-12-10 05:03:39.117386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.242 [2024-12-10 05:03:39.121138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.242 [2024-12-10 05:03:39.121299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.242 [2024-12-10 05:03:39.121317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.242 [2024-12-10 05:03:39.125343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.242 [2024-12-10 05:03:39.125457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.242 [2024-12-10 05:03:39.125475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.242 [2024-12-10 05:03:39.129194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.242 [2024-12-10 05:03:39.129313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.242 [2024-12-10 05:03:39.129331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.242 [2024-12-10 05:03:39.133113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.242 [2024-12-10 05:03:39.133236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.242 [2024-12-10 05:03:39.133254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.242 [2024-12-10 05:03:39.136997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.242 [2024-12-10 05:03:39.137132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.242 [2024-12-10 05:03:39.137150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.242 [2024-12-10 05:03:39.140899] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.242 [2024-12-10 05:03:39.141036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.242 [2024-12-10 05:03:39.141054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.242 [2024-12-10 05:03:39.144684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.242 [2024-12-10 05:03:39.144800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.242 [2024-12-10 05:03:39.144818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.242 [2024-12-10 05:03:39.148535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.242 [2024-12-10 05:03:39.148652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.242 [2024-12-10 05:03:39.148670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.242 [2024-12-10 05:03:39.152260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.242 [2024-12-10 05:03:39.152381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.242 [2024-12-10 05:03:39.152399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:48.242 [2024-12-10 05:03:39.155982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.242 [2024-12-10 05:03:39.156106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.242 [2024-12-10 05:03:39.156124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.242 [2024-12-10 05:03:39.159975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.242 [2024-12-10 05:03:39.160074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.242 [2024-12-10 05:03:39.160092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.164768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.164947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.164965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.169022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.169153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.169177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.172877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.172998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.173017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.176927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.177078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.177096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.180713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.180819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.180837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.184726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.184819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.184841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.189228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.189330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.189347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.193509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.193626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.193644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.197452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.197573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.197591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.201361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.201495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:48.243 [2024-12-10 05:03:39.201513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.205409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.205538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.205556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.209274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.209411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.209429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.213076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.213194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.213213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.217040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.217126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.217145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.221846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.222096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.222115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.226502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.226735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.226756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.230596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.230705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.230723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.234513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.234629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.234648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.238378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.238476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.238495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.242310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.242425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.242443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.246181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.246313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.246331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.250147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 
00:26:48.243 [2024-12-10 05:03:39.250251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.250269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.254030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.254187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.254205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.257830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.257984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.258002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.261716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.261854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.261872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.266340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.266432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.266450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.270453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.270587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.243 [2024-12-10 05:03:39.270605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.243 [2024-12-10 05:03:39.274226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.243 [2024-12-10 05:03:39.274335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.274353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.244 [2024-12-10 05:03:39.277982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.278123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.278141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.244 [2024-12-10 05:03:39.281869] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.282002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.282019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.244 [2024-12-10 05:03:39.285769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.285909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.285927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.244 [2024-12-10 05:03:39.289586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.289717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.289739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.244 [2024-12-10 05:03:39.293720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.293875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.293893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:26:48.244 [2024-12-10 05:03:39.298139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.298390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.298410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.244 [2024-12-10 05:03:39.303966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.304217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.304236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.244 [2024-12-10 05:03:39.309323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.309500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.309517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.244 [2024-12-10 05:03:39.316243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.316342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.316361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.244 [2024-12-10 05:03:39.322418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.322639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.322658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.244 [2024-12-10 05:03:39.328645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.328890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.328909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.244 [2024-12-10 05:03:39.335152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.335315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.335333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.244 [2024-12-10 05:03:39.341214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.341366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.341385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.244 [2024-12-10 05:03:39.347542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.347746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.347765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.244 [2024-12-10 05:03:39.354046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.354201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.354219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.244 [2024-12-10 05:03:39.360205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.360390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.360408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.244 [2024-12-10 05:03:39.366360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.366491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:48.244 [2024-12-10 05:03:39.366510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.244 [2024-12-10 05:03:39.373067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.244 [2024-12-10 05:03:39.373285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.244 [2024-12-10 05:03:39.373305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.504 [2024-12-10 05:03:39.378525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.504 [2024-12-10 05:03:39.378759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-12-10 05:03:39.378779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.504 [2024-12-10 05:03:39.384014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.504 [2024-12-10 05:03:39.384190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-12-10 05:03:39.384209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.504 [2024-12-10 05:03:39.389344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.504 [2024-12-10 05:03:39.389467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12576 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-12-10 05:03:39.389484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.504 [2024-12-10 05:03:39.394106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.504 [2024-12-10 05:03:39.394223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-12-10 05:03:39.394241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.504 [2024-12-10 05:03:39.399053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.504 [2024-12-10 05:03:39.399172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-12-10 05:03:39.399191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.504 [2024-12-10 05:03:39.403790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.504 [2024-12-10 05:03:39.403868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-12-10 05:03:39.403886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.504 [2024-12-10 05:03:39.408256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.504 [2024-12-10 05:03:39.408341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-12-10 05:03:39.408359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.504 [2024-12-10 05:03:39.412681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.504 [2024-12-10 05:03:39.412767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-12-10 05:03:39.412785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.504 [2024-12-10 05:03:39.417410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.504 [2024-12-10 05:03:39.417469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-12-10 05:03:39.417487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.504 [2024-12-10 05:03:39.422035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.504 [2024-12-10 05:03:39.422142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-12-10 05:03:39.422161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.504 [2024-12-10 05:03:39.426539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 
00:26:48.504 [2024-12-10 05:03:39.426605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-12-10 05:03:39.426623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.504 [2024-12-10 05:03:39.431088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.504 [2024-12-10 05:03:39.431177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-12-10 05:03:39.431200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.504 [2024-12-10 05:03:39.435424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.504 [2024-12-10 05:03:39.435503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-12-10 05:03:39.435521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.504 [2024-12-10 05:03:39.439365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.504 [2024-12-10 05:03:39.439477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-12-10 05:03:39.439495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.504 [2024-12-10 05:03:39.443212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.504 [2024-12-10 05:03:39.443320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.443339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.447103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.447261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.447279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.450991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.451132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.451150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.454797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.454892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.454910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.458732] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.458830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.458848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.462619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.462707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.462726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.466428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.466541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.466558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.470268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.470361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.470379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:48.505 [2024-12-10 05:03:39.474013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.474134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.474152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.478347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.478433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.478452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.483219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.483312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.483329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.487442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.487543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.487561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.493363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.493489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.493507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.499830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.499937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.499955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.505049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.505180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.505198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.509096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.509243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.509262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.513411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.513544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.513562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.518452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.518631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.518649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.523609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.523834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.523854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.528704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.528908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:48.505 [2024-12-10 05:03:39.528927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.533852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.534013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.534031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.539081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.539260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.539278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.544211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.544337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.544355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.549431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.549571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.549593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.554917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.555045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.555063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.560148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.560347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.560365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.565674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.565853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.565873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.571124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.571316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.505 [2024-12-10 05:03:39.571334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.505 [2024-12-10 05:03:39.576321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.505 [2024-12-10 05:03:39.576498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.506 [2024-12-10 05:03:39.576515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.506 [2024-12-10 05:03:39.581493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.506 [2024-12-10 05:03:39.581708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.506 [2024-12-10 05:03:39.581728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.506 [2024-12-10 05:03:39.586882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.506 [2024-12-10 05:03:39.587043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.506 [2024-12-10 05:03:39.587060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.506 6439.50 IOPS, 804.94 MiB/s [2024-12-10T04:03:39.643Z] [2024-12-10 05:03:39.593298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x10b2870) with pdu=0x200016eff3c8 00:26:48.506 [2024-12-10 05:03:39.593469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.506 [2024-12-10 05:03:39.593488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.506 00:26:48.506 Latency(us) 00:26:48.506 [2024-12-10T04:03:39.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.506 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:48.506 nvme0n1 : 2.00 6434.88 804.36 0.00 0.00 2481.53 1497.97 7864.32 00:26:48.506 [2024-12-10T04:03:39.643Z] =================================================================================================================== 00:26:48.506 [2024-12-10T04:03:39.643Z] Total : 6434.88 804.36 0.00 0.00 2481.53 1497.97 7864.32 00:26:48.506 { 00:26:48.506 "results": [ 00:26:48.506 { 00:26:48.506 "job": "nvme0n1", 00:26:48.506 "core_mask": "0x2", 00:26:48.506 "workload": "randwrite", 00:26:48.506 "status": "finished", 00:26:48.506 "queue_depth": 16, 00:26:48.506 "io_size": 131072, 00:26:48.506 "runtime": 2.004699, 00:26:48.506 "iops": 6434.8812465113215, 00:26:48.506 "mibps": 804.3601558139152, 00:26:48.506 "io_failed": 0, 00:26:48.506 "io_timeout": 0, 00:26:48.506 "avg_latency_us": 2481.533703654485, 00:26:48.506 "min_latency_us": 1497.9657142857143, 00:26:48.506 "max_latency_us": 7864.32 00:26:48.506 } 00:26:48.506 ], 00:26:48.506 "core_count": 1 00:26:48.506 } 00:26:48.506 05:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:48.506 05:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:48.506 05:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r 
'.bdevs[0] 00:26:48.506 | .driver_specific 00:26:48.506 | .nvme_error 00:26:48.506 | .status_code 00:26:48.506 | .command_transient_transport_error' 00:26:48.506 05:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:48.764 05:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 417 > 0 )) 00:26:48.764 05:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 773626 00:26:48.764 05:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 773626 ']' 00:26:48.764 05:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 773626 00:26:48.764 05:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:48.764 05:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:48.764 05:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773626 00:26:48.764 05:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:48.764 05:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:48.764 05:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773626' 00:26:48.764 killing process with pid 773626 00:26:48.764 05:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 773626 00:26:48.764 Received shutdown signal, test time was about 2.000000 seconds 00:26:48.764 00:26:48.764 Latency(us) 00:26:48.764 [2024-12-10T04:03:39.901Z] Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:26:48.764 [2024-12-10T04:03:39.901Z] =================================================================================================================== 00:26:48.764 [2024-12-10T04:03:39.901Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:48.764 05:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 773626 00:26:49.023 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 772001 00:26:49.023 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 772001 ']' 00:26:49.023 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 772001 00:26:49.023 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:49.023 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:49.023 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772001 00:26:49.023 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:49.023 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:49.023 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772001' 00:26:49.023 killing process with pid 772001 00:26:49.023 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 772001 00:26:49.023 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 772001 00:26:49.282 00:26:49.282 real 0m13.884s 00:26:49.282 user 0m26.518s 00:26:49.282 sys 0m4.619s 00:26:49.282 05:03:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:49.282 ************************************ 00:26:49.282 END TEST nvmf_digest_error 00:26:49.282 ************************************ 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:49.282 rmmod nvme_tcp 00:26:49.282 rmmod nvme_fabrics 00:26:49.282 rmmod nvme_keyring 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 772001 ']' 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 772001 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 772001 ']' 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 772001 00:26:49.282 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (772001) - No such process 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 772001 is not found' 00:26:49.282 Process with pid 772001 is not found 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.282 05:03:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.817 05:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:51.817 00:26:51.817 real 0m36.480s 00:26:51.817 user 0m55.728s 00:26:51.817 sys 0m13.676s 00:26:51.817 05:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:51.817 05:03:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:51.817 
************************************ 00:26:51.817 END TEST nvmf_digest 00:26:51.817 ************************************ 00:26:51.817 05:03:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:51.817 05:03:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:51.817 05:03:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:51.817 05:03:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.818 ************************************ 00:26:51.818 START TEST nvmf_bdevperf 00:26:51.818 ************************************ 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:51.818 * Looking for test storage... 
00:26:51.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:51.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.818 --rc genhtml_branch_coverage=1 00:26:51.818 --rc genhtml_function_coverage=1 00:26:51.818 --rc genhtml_legend=1 00:26:51.818 --rc geninfo_all_blocks=1 00:26:51.818 --rc geninfo_unexecuted_blocks=1 00:26:51.818 00:26:51.818 ' 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:26:51.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.818 --rc genhtml_branch_coverage=1 00:26:51.818 --rc genhtml_function_coverage=1 00:26:51.818 --rc genhtml_legend=1 00:26:51.818 --rc geninfo_all_blocks=1 00:26:51.818 --rc geninfo_unexecuted_blocks=1 00:26:51.818 00:26:51.818 ' 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:51.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.818 --rc genhtml_branch_coverage=1 00:26:51.818 --rc genhtml_function_coverage=1 00:26:51.818 --rc genhtml_legend=1 00:26:51.818 --rc geninfo_all_blocks=1 00:26:51.818 --rc geninfo_unexecuted_blocks=1 00:26:51.818 00:26:51.818 ' 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:51.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.818 --rc genhtml_branch_coverage=1 00:26:51.818 --rc genhtml_function_coverage=1 00:26:51.818 --rc genhtml_legend=1 00:26:51.818 --rc geninfo_all_blocks=1 00:26:51.818 --rc geninfo_unexecuted_blocks=1 00:26:51.818 00:26:51.818 ' 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:51.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:51.818 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:51.819 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:51.819 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.819 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:51.819 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:51.819 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:51.819 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.819 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.819 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.819 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:51.819 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:51.819 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:51.819 05:03:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:58.389 05:03:48 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.389 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:58.390 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.390 
05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:58.390 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:58.390 Found net devices under 0000:af:00.0: cvl_0_0 00:26:58.390 05:03:48 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:58.390 Found net devices under 0000:af:00.1: cvl_0_1 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:58.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:26:58.390 00:26:58.390 --- 10.0.0.2 ping statistics --- 00:26:58.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.390 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:58.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:26:58.390 00:26:58.390 --- 10.0.0.1 ping statistics --- 00:26:58.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.390 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=777694 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 777694 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 777694 ']' 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:58.390 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.390 [2024-12-10 05:03:48.667021] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:26:58.390 [2024-12-10 05:03:48.667067] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.390 [2024-12-10 05:03:48.744962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:58.390 [2024-12-10 05:03:48.785872] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.390 [2024-12-10 05:03:48.785907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.390 [2024-12-10 05:03:48.785915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.390 [2024-12-10 05:03:48.785920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.390 [2024-12-10 05:03:48.785926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:58.391 [2024-12-10 05:03:48.787250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.391 [2024-12-10 05:03:48.787358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.391 [2024-12-10 05:03:48.787359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.391 [2024-12-10 05:03:48.918912] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.391 Malloc0 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.391 [2024-12-10 05:03:48.980541] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:58.391 
05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:58.391 { 00:26:58.391 "params": { 00:26:58.391 "name": "Nvme$subsystem", 00:26:58.391 "trtype": "$TEST_TRANSPORT", 00:26:58.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:58.391 "adrfam": "ipv4", 00:26:58.391 "trsvcid": "$NVMF_PORT", 00:26:58.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:58.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:58.391 "hdgst": ${hdgst:-false}, 00:26:58.391 "ddgst": ${ddgst:-false} 00:26:58.391 }, 00:26:58.391 "method": "bdev_nvme_attach_controller" 00:26:58.391 } 00:26:58.391 EOF 00:26:58.391 )") 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:58.391 05:03:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:58.391 "params": { 00:26:58.391 "name": "Nvme1", 00:26:58.391 "trtype": "tcp", 00:26:58.391 "traddr": "10.0.0.2", 00:26:58.391 "adrfam": "ipv4", 00:26:58.391 "trsvcid": "4420", 00:26:58.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:58.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:58.391 "hdgst": false, 00:26:58.391 "ddgst": false 00:26:58.391 }, 00:26:58.391 "method": "bdev_nvme_attach_controller" 00:26:58.391 }' 00:26:58.391 [2024-12-10 05:03:49.034173] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:26:58.391 [2024-12-10 05:03:49.034214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777800 ] 00:26:58.391 [2024-12-10 05:03:49.108750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.391 [2024-12-10 05:03:49.148046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.391 Running I/O for 1 seconds... 00:26:59.328 11409.00 IOPS, 44.57 MiB/s 00:26:59.328 Latency(us) 00:26:59.328 [2024-12-10T04:03:50.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.328 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:59.328 Verification LBA range: start 0x0 length 0x4000 00:26:59.328 Nvme1n1 : 1.01 11495.02 44.90 0.00 0.00 11072.19 1529.17 15166.90 00:26:59.328 [2024-12-10T04:03:50.465Z] =================================================================================================================== 00:26:59.328 [2024-12-10T04:03:50.465Z] Total : 11495.02 44.90 0.00 0.00 11072.19 1529.17 15166.90 00:26:59.588 05:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=778025 00:26:59.588 05:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:59.588 05:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:59.588 05:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:59.588 05:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:59.588 05:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:59.588 05:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:26:59.588 05:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:59.588 { 00:26:59.588 "params": { 00:26:59.588 "name": "Nvme$subsystem", 00:26:59.588 "trtype": "$TEST_TRANSPORT", 00:26:59.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.588 "adrfam": "ipv4", 00:26:59.588 "trsvcid": "$NVMF_PORT", 00:26:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.588 "hdgst": ${hdgst:-false}, 00:26:59.588 "ddgst": ${ddgst:-false} 00:26:59.588 }, 00:26:59.588 "method": "bdev_nvme_attach_controller" 00:26:59.588 } 00:26:59.588 EOF 00:26:59.588 )") 00:26:59.588 05:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:59.588 05:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:59.588 05:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:59.588 05:03:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:59.588 "params": { 00:26:59.588 "name": "Nvme1", 00:26:59.588 "trtype": "tcp", 00:26:59.588 "traddr": "10.0.0.2", 00:26:59.588 "adrfam": "ipv4", 00:26:59.588 "trsvcid": "4420", 00:26:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:59.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:59.588 "hdgst": false, 00:26:59.588 "ddgst": false 00:26:59.588 }, 00:26:59.588 "method": "bdev_nvme_attach_controller" 00:26:59.588 }' 00:26:59.588 [2024-12-10 05:03:50.524706] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:26:59.588 [2024-12-10 05:03:50.524752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778025 ] 00:26:59.588 [2024-12-10 05:03:50.597622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.588 [2024-12-10 05:03:50.636666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.847 Running I/O for 15 seconds... 00:27:01.793 11238.00 IOPS, 43.90 MiB/s [2024-12-10T04:03:53.497Z] 11360.00 IOPS, 44.38 MiB/s [2024-12-10T04:03:53.497Z] 05:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 777694 00:27:02.360 05:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:02.621 [2024-12-10 05:03:53.495363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.621 [2024-12-10 05:03:53.495398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.621 [2024-12-10 05:03:53.495414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.621 [2024-12-10 05:03:53.495423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.621 [2024-12-10 05:03:53.495433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.621 [2024-12-10 05:03:53.495441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.621 [2024-12-10 05:03:53.495455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.621 [2024-12-10 05:03:53.495462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.621 [2024-12-10 05:03:53.495474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.621 [2024-12-10 05:03:53.495481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.621 [2024-12-10 05:03:53.495489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.621 [2024-12-10 05:03:53.495496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.621 [2024-12-10 05:03:53.495504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.621 [2024-12-10 05:03:53.495511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.621 [2024-12-10 05:03:53.495523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.621 [2024-12-10 05:03:53.495531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.621 [2024-12-10 05:03:53.495539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.621 [2024-12-10 05:03:53.495546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:02.621 [2024-12-10 05:03:53.495554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.621 [2024-12-10 05:03:53.495561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.621 [2024-12-10 05:03:53.495569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.621 [2024-12-10 05:03:53.495578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.621 [2024-12-10 05:03:53.495587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.622 [2024-12-10 05:03:53.495595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.622 [2024-12-10 05:03:53.495604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.622 [2024-12-10 05:03:53.495613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.622 [2024-12-10 05:03:53.495623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.622 [2024-12-10 05:03:53.495630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.622 [2024-12-10 05:03:53.495638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.622 [2024-12-10 05:03:53.495650] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.622 [2024-12-10 05:03:53.495662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.622 [2024-12-10 05:03:53.495673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.622 [2024-12-10 05:03:53.495683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.622 [2024-12-10 05:03:53.495691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.622 [2024-12-10 05:03:53.495702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:104200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.622 [2024-12-10 05:03:53.495712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.622 [2024-12-10 05:03:53.495724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.622 [2024-12-10 05:03:53.495731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.622 [2024-12-10 05:03:53.495740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.622 [2024-12-10 05:03:53.495748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.622 [2024-12-10 05:03:53.495756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:103 nsid:1 lba:104224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.622 [2024-12-10 05:03:53.495763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.622 [2024-12-10 05:03:53.495771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.622 [2024-12-10 05:03:53.495778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.622 [2024-12-10 05:03:53.495786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.622 [2024-12-10 05:03:53.495792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.622 [2024-12-10 05:03:53.495800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.622 [2024-12-10 05:03:53.495807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.622 [2024-12-10 05:03:53.495815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.622 [2024-12-10 05:03:53.495821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.622 [2024-12-10 05:03:53.495830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.622 [2024-12-10 05:03:53.495836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:02.622-00:27:02.624 [2024-12-10 05:03:53.495844-53.497378] (repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs: interleaved WRITE sqid:1 commands lba:104256-104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 commands lba:103544-104008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0) 00:27:02.624 [2024-12-10 05:03:53.497386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.624 [2024-12-10 05:03:53.497394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.624 [2024-12-10 05:03:53.497401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.624 [2024-12-10 05:03:53.497408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.624 [2024-12-10 05:03:53.497415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.624 [2024-12-10 05:03:53.497421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.624 [2024-12-10 05:03:53.497429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.624 [2024-12-10 05:03:53.497435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.624 [2024-12-10 05:03:53.497444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.624 [2024-12-10 05:03:53.497451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.624 [2024-12-10 05:03:53.497458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1705d40 is same with the state(6) to be set 00:27:02.624 [2024-12-10 05:03:53.497466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:02.624 [2024-12-10 05:03:53.497471] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:02.624 [2024-12-10 05:03:53.497477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104056 len:8 PRP1 0x0 PRP2 0x0 00:27:02.624 [2024-12-10 05:03:53.497484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.624 [2024-12-10 05:03:53.497564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.624 [2024-12-10 05:03:53.497575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.624 [2024-12-10 05:03:53.497582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.624 [2024-12-10 05:03:53.497589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.624 [2024-12-10 05:03:53.497595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.624 [2024-12-10 05:03:53.497603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.624 [2024-12-10 05:03:53.497609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.624 [2024-12-10 05:03:53.497616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.624 [2024-12-10 05:03:53.497622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.625 [2024-12-10 
05:03:53.500403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.625 [2024-12-10 05:03:53.500428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.625 [2024-12-10 05:03:53.500972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.625 [2024-12-10 05:03:53.500989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.625 [2024-12-10 05:03:53.500998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.625 [2024-12-10 05:03:53.501180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.625 [2024-12-10 05:03:53.501355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.625 [2024-12-10 05:03:53.501364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.625 [2024-12-10 05:03:53.501371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.625 [2024-12-10 05:03:53.501379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.625 [2024-12-10 05:03:53.513627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.625 [2024-12-10 05:03:53.513912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.625 [2024-12-10 05:03:53.513930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.625 [2024-12-10 05:03:53.513938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.625 [2024-12-10 05:03:53.514113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.625 [2024-12-10 05:03:53.514296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.625 [2024-12-10 05:03:53.514306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.625 [2024-12-10 05:03:53.514314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.625 [2024-12-10 05:03:53.514325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.625 [2024-12-10 05:03:53.526567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.625 [2024-12-10 05:03:53.526998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.625 [2024-12-10 05:03:53.527048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.625 [2024-12-10 05:03:53.527073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.625 [2024-12-10 05:03:53.527674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.625 [2024-12-10 05:03:53.527872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.625 [2024-12-10 05:03:53.527882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.625 [2024-12-10 05:03:53.527889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.625 [2024-12-10 05:03:53.527896] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.625 [2024-12-10 05:03:53.539433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.625 [2024-12-10 05:03:53.539683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.625 [2024-12-10 05:03:53.539700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.625 [2024-12-10 05:03:53.539708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.625 [2024-12-10 05:03:53.539867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.625 [2024-12-10 05:03:53.540028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.625 [2024-12-10 05:03:53.540037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.625 [2024-12-10 05:03:53.540044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.625 [2024-12-10 05:03:53.540050] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.625 [2024-12-10 05:03:53.552220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.625 [2024-12-10 05:03:53.552633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.625 [2024-12-10 05:03:53.552674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.625 [2024-12-10 05:03:53.552700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.625 [2024-12-10 05:03:53.553274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.625 [2024-12-10 05:03:53.553445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.625 [2024-12-10 05:03:53.553455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.625 [2024-12-10 05:03:53.553462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.625 [2024-12-10 05:03:53.553468] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.625 [2024-12-10 05:03:53.565004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.625 [2024-12-10 05:03:53.565424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.625 [2024-12-10 05:03:53.565445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.625 [2024-12-10 05:03:53.565453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.625 [2024-12-10 05:03:53.565612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.625 [2024-12-10 05:03:53.565772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.625 [2024-12-10 05:03:53.565781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.625 [2024-12-10 05:03:53.565787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.625 [2024-12-10 05:03:53.565794] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.625 [2024-12-10 05:03:53.577751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.625 [2024-12-10 05:03:53.578183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.625 [2024-12-10 05:03:53.578229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.625 [2024-12-10 05:03:53.578254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.625 [2024-12-10 05:03:53.578836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.625 [2024-12-10 05:03:53.579073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.625 [2024-12-10 05:03:53.579082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.625 [2024-12-10 05:03:53.579088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.625 [2024-12-10 05:03:53.579094] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.625 [2024-12-10 05:03:53.590505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.625 [2024-12-10 05:03:53.590862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.625 [2024-12-10 05:03:53.590907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.625 [2024-12-10 05:03:53.590930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.625 [2024-12-10 05:03:53.591531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.625 [2024-12-10 05:03:53.592119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.625 [2024-12-10 05:03:53.592145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.625 [2024-12-10 05:03:53.592175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.625 [2024-12-10 05:03:53.592195] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.625 [2024-12-10 05:03:53.603346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.625 [2024-12-10 05:03:53.603695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.625 [2024-12-10 05:03:53.603712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.625 [2024-12-10 05:03:53.603719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.625 [2024-12-10 05:03:53.603882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.625 [2024-12-10 05:03:53.604043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.625 [2024-12-10 05:03:53.604053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.625 [2024-12-10 05:03:53.604059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.625 [2024-12-10 05:03:53.604065] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.625 [2024-12-10 05:03:53.616097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.625 [2024-12-10 05:03:53.616496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.625 [2024-12-10 05:03:53.616514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.625 [2024-12-10 05:03:53.616521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.625 [2024-12-10 05:03:53.616681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.625 [2024-12-10 05:03:53.616840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.625 [2024-12-10 05:03:53.616850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.626 [2024-12-10 05:03:53.616856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.626 [2024-12-10 05:03:53.616862] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.626 [2024-12-10 05:03:53.628875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.626 [2024-12-10 05:03:53.629288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.626 [2024-12-10 05:03:53.629305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.626 [2024-12-10 05:03:53.629312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.626 [2024-12-10 05:03:53.629471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.626 [2024-12-10 05:03:53.629632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.626 [2024-12-10 05:03:53.629642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.626 [2024-12-10 05:03:53.629648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.626 [2024-12-10 05:03:53.629654] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.626 [2024-12-10 05:03:53.641719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.626 [2024-12-10 05:03:53.642135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.626 [2024-12-10 05:03:53.642152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.626 [2024-12-10 05:03:53.642159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.626 [2024-12-10 05:03:53.642346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.626 [2024-12-10 05:03:53.642517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.626 [2024-12-10 05:03:53.642531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.626 [2024-12-10 05:03:53.642538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.626 [2024-12-10 05:03:53.642545] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.626 [2024-12-10 05:03:53.654582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.626 [2024-12-10 05:03:53.655020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.626 [2024-12-10 05:03:53.655066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.626 [2024-12-10 05:03:53.655089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.626 [2024-12-10 05:03:53.655689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.626 [2024-12-10 05:03:53.656286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.626 [2024-12-10 05:03:53.656312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.626 [2024-12-10 05:03:53.656335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.626 [2024-12-10 05:03:53.656354] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.626 [2024-12-10 05:03:53.669763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.626 [2024-12-10 05:03:53.670301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.626 [2024-12-10 05:03:53.670324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.626 [2024-12-10 05:03:53.670335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.626 [2024-12-10 05:03:53.670591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.626 [2024-12-10 05:03:53.670847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.626 [2024-12-10 05:03:53.670860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.626 [2024-12-10 05:03:53.670870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.626 [2024-12-10 05:03:53.670879] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.626 [2024-12-10 05:03:53.682892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.626 [2024-12-10 05:03:53.683323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.626 [2024-12-10 05:03:53.683341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.626 [2024-12-10 05:03:53.683349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.626 [2024-12-10 05:03:53.683522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.626 [2024-12-10 05:03:53.683696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.626 [2024-12-10 05:03:53.683706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.626 [2024-12-10 05:03:53.683713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.626 [2024-12-10 05:03:53.683724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.626 [2024-12-10 05:03:53.695736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.626 [2024-12-10 05:03:53.696084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.626 [2024-12-10 05:03:53.696101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.626 [2024-12-10 05:03:53.696109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.626 [2024-12-10 05:03:53.696293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.626 [2024-12-10 05:03:53.696462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.626 [2024-12-10 05:03:53.696472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.626 [2024-12-10 05:03:53.696479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.626 [2024-12-10 05:03:53.696485] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.626 [2024-12-10 05:03:53.708588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.626 [2024-12-10 05:03:53.709011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.626 [2024-12-10 05:03:53.709058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.626 [2024-12-10 05:03:53.709083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.626 [2024-12-10 05:03:53.709491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.626 [2024-12-10 05:03:53.709662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.626 [2024-12-10 05:03:53.709671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.626 [2024-12-10 05:03:53.709679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.626 [2024-12-10 05:03:53.709686] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.626 [2024-12-10 05:03:53.721516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.626 [2024-12-10 05:03:53.721969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.626 [2024-12-10 05:03:53.722014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.626 [2024-12-10 05:03:53.722038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.626 [2024-12-10 05:03:53.722512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.626 [2024-12-10 05:03:53.722683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.626 [2024-12-10 05:03:53.722693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.626 [2024-12-10 05:03:53.722699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.626 [2024-12-10 05:03:53.722706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.626 [2024-12-10 05:03:53.734271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.626 [2024-12-10 05:03:53.734698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.626 [2024-12-10 05:03:53.734752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.626 [2024-12-10 05:03:53.734776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.627 [2024-12-10 05:03:53.735283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.627 [2024-12-10 05:03:53.735455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.627 [2024-12-10 05:03:53.735465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.627 [2024-12-10 05:03:53.735471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.627 [2024-12-10 05:03:53.735477] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.627 [2024-12-10 05:03:53.747061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.627 [2024-12-10 05:03:53.747438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.627 [2024-12-10 05:03:53.747457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.627 [2024-12-10 05:03:53.747465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.627 [2024-12-10 05:03:53.747639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.627 [2024-12-10 05:03:53.747815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.627 [2024-12-10 05:03:53.747825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.627 [2024-12-10 05:03:53.747832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.627 [2024-12-10 05:03:53.747840] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.887 [2024-12-10 05:03:53.760131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.887 [2024-12-10 05:03:53.760504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.887 [2024-12-10 05:03:53.760522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.887 [2024-12-10 05:03:53.760531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.887 [2024-12-10 05:03:53.760705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.887 [2024-12-10 05:03:53.760881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.887 [2024-12-10 05:03:53.760891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.887 [2024-12-10 05:03:53.760900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.887 [2024-12-10 05:03:53.760908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.887 [2024-12-10 05:03:53.773190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.887 [2024-12-10 05:03:53.773537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.887 [2024-12-10 05:03:53.773554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.887 [2024-12-10 05:03:53.773562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.887 [2024-12-10 05:03:53.773743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.887 [2024-12-10 05:03:53.773917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.887 [2024-12-10 05:03:53.773926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.887 [2024-12-10 05:03:53.773933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.887 [2024-12-10 05:03:53.773939] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.887 [2024-12-10 05:03:53.786078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.887 [2024-12-10 05:03:53.786498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.887 [2024-12-10 05:03:53.786516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.887 [2024-12-10 05:03:53.786523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.887 [2024-12-10 05:03:53.786683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.887 [2024-12-10 05:03:53.786842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.887 [2024-12-10 05:03:53.786852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.887 [2024-12-10 05:03:53.786858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.887 [2024-12-10 05:03:53.786864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.887 [2024-12-10 05:03:53.798971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.887 [2024-12-10 05:03:53.799386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.887 [2024-12-10 05:03:53.799404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.887 [2024-12-10 05:03:53.799411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.887 [2024-12-10 05:03:53.799571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.887 [2024-12-10 05:03:53.799731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.887 [2024-12-10 05:03:53.799741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.887 [2024-12-10 05:03:53.799747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.887 [2024-12-10 05:03:53.799753] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.887 [2024-12-10 05:03:53.811737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.887 [2024-12-10 05:03:53.812159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.887 [2024-12-10 05:03:53.812217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.887 [2024-12-10 05:03:53.812241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.887 [2024-12-10 05:03:53.812653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.887 [2024-12-10 05:03:53.812819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.887 [2024-12-10 05:03:53.812833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.887 [2024-12-10 05:03:53.812839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.887 [2024-12-10 05:03:53.812845] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.887 [2024-12-10 05:03:53.824601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.887 [2024-12-10 05:03:53.825025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.887 [2024-12-10 05:03:53.825044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.887 [2024-12-10 05:03:53.825051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.887 [2024-12-10 05:03:53.825234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.887 [2024-12-10 05:03:53.825404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.887 [2024-12-10 05:03:53.825414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.887 [2024-12-10 05:03:53.825421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.887 [2024-12-10 05:03:53.825427] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.887 [2024-12-10 05:03:53.837405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.887 [2024-12-10 05:03:53.837818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.887 [2024-12-10 05:03:53.837835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.888 [2024-12-10 05:03:53.837842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.888 [2024-12-10 05:03:53.838002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.888 [2024-12-10 05:03:53.838161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.888 [2024-12-10 05:03:53.838178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.888 [2024-12-10 05:03:53.838185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.888 [2024-12-10 05:03:53.838192] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.888 [2024-12-10 05:03:53.850243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.888 [2024-12-10 05:03:53.850632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.888 [2024-12-10 05:03:53.850649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.888 [2024-12-10 05:03:53.850656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.888 [2024-12-10 05:03:53.850816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.888 [2024-12-10 05:03:53.850977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.888 [2024-12-10 05:03:53.850986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.888 [2024-12-10 05:03:53.850992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.888 [2024-12-10 05:03:53.851001] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.888 [2024-12-10 05:03:53.863026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.888 [2024-12-10 05:03:53.863359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.888 [2024-12-10 05:03:53.863376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.888 [2024-12-10 05:03:53.863383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.888 [2024-12-10 05:03:53.863543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.888 [2024-12-10 05:03:53.863703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.888 [2024-12-10 05:03:53.863712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.888 [2024-12-10 05:03:53.863718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.888 [2024-12-10 05:03:53.863725] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.888 [2024-12-10 05:03:53.875840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.888 [2024-12-10 05:03:53.876251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.888 [2024-12-10 05:03:53.876290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.888 [2024-12-10 05:03:53.876315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.888 [2024-12-10 05:03:53.876844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.888 [2024-12-10 05:03:53.877005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.888 [2024-12-10 05:03:53.877014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.888 [2024-12-10 05:03:53.877020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.888 [2024-12-10 05:03:53.877026] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.888 [2024-12-10 05:03:53.888587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.888 [2024-12-10 05:03:53.889001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.888 [2024-12-10 05:03:53.889019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.888 [2024-12-10 05:03:53.889026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.888 [2024-12-10 05:03:53.889208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.888 [2024-12-10 05:03:53.889381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.888 [2024-12-10 05:03:53.889391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.888 [2024-12-10 05:03:53.889397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.888 [2024-12-10 05:03:53.889404] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.888 [2024-12-10 05:03:53.901433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.888 [2024-12-10 05:03:53.901772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.888 [2024-12-10 05:03:53.901792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.888 [2024-12-10 05:03:53.901800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.888 [2024-12-10 05:03:53.901960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.888 [2024-12-10 05:03:53.902121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.888 [2024-12-10 05:03:53.902130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.888 [2024-12-10 05:03:53.902135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.888 [2024-12-10 05:03:53.902142] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.888 [2024-12-10 05:03:53.914215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.888 [2024-12-10 05:03:53.914625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.888 [2024-12-10 05:03:53.914664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.888 [2024-12-10 05:03:53.914690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.888 [2024-12-10 05:03:53.915290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.888 [2024-12-10 05:03:53.915879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.888 [2024-12-10 05:03:53.915905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.888 [2024-12-10 05:03:53.915936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.888 [2024-12-10 05:03:53.915943] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.888 9775.00 IOPS, 38.18 MiB/s [2024-12-10T04:03:54.025Z] [2024-12-10 05:03:53.927305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.888 [2024-12-10 05:03:53.927702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.888 [2024-12-10 05:03:53.927718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.888 [2024-12-10 05:03:53.927726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.888 [2024-12-10 05:03:53.927886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.888 [2024-12-10 05:03:53.928046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.888 [2024-12-10 05:03:53.928056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.888 [2024-12-10 05:03:53.928062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.888 [2024-12-10 05:03:53.928069] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.888 [2024-12-10 05:03:53.940032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.888 [2024-12-10 05:03:53.940447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.888 [2024-12-10 05:03:53.940465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.888 [2024-12-10 05:03:53.940472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.888 [2024-12-10 05:03:53.940636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.888 [2024-12-10 05:03:53.940797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.888 [2024-12-10 05:03:53.940806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.888 [2024-12-10 05:03:53.940812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.888 [2024-12-10 05:03:53.940818] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.888 [2024-12-10 05:03:53.952882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.888 [2024-12-10 05:03:53.953315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.888 [2024-12-10 05:03:53.953334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.888 [2024-12-10 05:03:53.953341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.888 [2024-12-10 05:03:53.953502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.888 [2024-12-10 05:03:53.953662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.888 [2024-12-10 05:03:53.953672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.888 [2024-12-10 05:03:53.953678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.888 [2024-12-10 05:03:53.953684] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.888 [2024-12-10 05:03:53.965667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.888 [2024-12-10 05:03:53.966095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.888 [2024-12-10 05:03:53.966145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.888 [2024-12-10 05:03:53.966182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.889 [2024-12-10 05:03:53.966767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.889 [2024-12-10 05:03:53.967280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.889 [2024-12-10 05:03:53.967290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.889 [2024-12-10 05:03:53.967296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.889 [2024-12-10 05:03:53.967302] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.889 [2024-12-10 05:03:53.978583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.889 [2024-12-10 05:03:53.978977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.889 [2024-12-10 05:03:53.978995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.889 [2024-12-10 05:03:53.979002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.889 [2024-12-10 05:03:53.979163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.889 [2024-12-10 05:03:53.979329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.889 [2024-12-10 05:03:53.979342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.889 [2024-12-10 05:03:53.979349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.889 [2024-12-10 05:03:53.979355] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.889 [2024-12-10 05:03:53.991478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.889 [2024-12-10 05:03:53.991834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.889 [2024-12-10 05:03:53.991879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.889 [2024-12-10 05:03:53.991903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.889 [2024-12-10 05:03:53.992370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.889 [2024-12-10 05:03:53.992532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.889 [2024-12-10 05:03:53.992542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.889 [2024-12-10 05:03:53.992548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.889 [2024-12-10 05:03:53.992554] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.889 [2024-12-10 05:03:54.004379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.889 [2024-12-10 05:03:54.004744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.889 [2024-12-10 05:03:54.004761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.889 [2024-12-10 05:03:54.004769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.889 [2024-12-10 05:03:54.004931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.889 [2024-12-10 05:03:54.005091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.889 [2024-12-10 05:03:54.005101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.889 [2024-12-10 05:03:54.005108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.889 [2024-12-10 05:03:54.005116] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.889 [2024-12-10 05:03:54.017376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.889 [2024-12-10 05:03:54.017803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.889 [2024-12-10 05:03:54.017821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:02.889 [2024-12-10 05:03:54.017829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:02.889 [2024-12-10 05:03:54.018003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:02.889 [2024-12-10 05:03:54.018184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.889 [2024-12-10 05:03:54.018194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.889 [2024-12-10 05:03:54.018201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.889 [2024-12-10 05:03:54.018214] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.149 [2024-12-10 05:03:54.030383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.149 [2024-12-10 05:03:54.030663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.149 [2024-12-10 05:03:54.030681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.149 [2024-12-10 05:03:54.030689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.149 [2024-12-10 05:03:54.030864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.149 [2024-12-10 05:03:54.031040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.149 [2024-12-10 05:03:54.031050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.149 [2024-12-10 05:03:54.031058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.149 [2024-12-10 05:03:54.031065] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.149 [2024-12-10 05:03:54.043519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.149 [2024-12-10 05:03:54.043911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.149 [2024-12-10 05:03:54.043956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.149 [2024-12-10 05:03:54.043979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.149 [2024-12-10 05:03:54.044514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.149 [2024-12-10 05:03:54.044686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.149 [2024-12-10 05:03:54.044695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.149 [2024-12-10 05:03:54.044702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.149 [2024-12-10 05:03:54.044709] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.149 [2024-12-10 05:03:54.056318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.149 [2024-12-10 05:03:54.056655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.149 [2024-12-10 05:03:54.056673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.150 [2024-12-10 05:03:54.056681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.150 [2024-12-10 05:03:54.056849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.150 [2024-12-10 05:03:54.057019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.150 [2024-12-10 05:03:54.057029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.150 [2024-12-10 05:03:54.057035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.150 [2024-12-10 05:03:54.057042] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.150 [2024-12-10 05:03:54.069308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.150 [2024-12-10 05:03:54.069685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.150 [2024-12-10 05:03:54.069703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.150 [2024-12-10 05:03:54.069710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.150 [2024-12-10 05:03:54.069880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.150 [2024-12-10 05:03:54.070048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.150 [2024-12-10 05:03:54.070059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.150 [2024-12-10 05:03:54.070065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.150 [2024-12-10 05:03:54.070071] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.150 [2024-12-10 05:03:54.082242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.150 [2024-12-10 05:03:54.082582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.150 [2024-12-10 05:03:54.082600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.150 [2024-12-10 05:03:54.082608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.150 [2024-12-10 05:03:54.082781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.150 [2024-12-10 05:03:54.082956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.150 [2024-12-10 05:03:54.082966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.150 [2024-12-10 05:03:54.082973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.150 [2024-12-10 05:03:54.082979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.150 [2024-12-10 05:03:54.095079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.150 [2024-12-10 05:03:54.095410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.150 [2024-12-10 05:03:54.095428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.150 [2024-12-10 05:03:54.095435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.150 [2024-12-10 05:03:54.095594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.150 [2024-12-10 05:03:54.095754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.150 [2024-12-10 05:03:54.095764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.150 [2024-12-10 05:03:54.095770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.150 [2024-12-10 05:03:54.095776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.150 [2024-12-10 05:03:54.107991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.150 [2024-12-10 05:03:54.108336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.150 [2024-12-10 05:03:54.108354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.150 [2024-12-10 05:03:54.108361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.150 [2024-12-10 05:03:54.108538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.150 [2024-12-10 05:03:54.108699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.150 [2024-12-10 05:03:54.108709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.150 [2024-12-10 05:03:54.108715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.150 [2024-12-10 05:03:54.108721] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.150 [2024-12-10 05:03:54.120857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.150 [2024-12-10 05:03:54.121197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.150 [2024-12-10 05:03:54.121243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.150 [2024-12-10 05:03:54.121266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.150 [2024-12-10 05:03:54.121850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.150 [2024-12-10 05:03:54.122272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.150 [2024-12-10 05:03:54.122292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.150 [2024-12-10 05:03:54.122307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.150 [2024-12-10 05:03:54.122320] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.150 [2024-12-10 05:03:54.135655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.150 [2024-12-10 05:03:54.136208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.150 [2024-12-10 05:03:54.136254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.150 [2024-12-10 05:03:54.136278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.150 [2024-12-10 05:03:54.136719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.150 [2024-12-10 05:03:54.136976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.150 [2024-12-10 05:03:54.136989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.150 [2024-12-10 05:03:54.136998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.150 [2024-12-10 05:03:54.137009] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.150 [2024-12-10 05:03:54.148750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.150 [2024-12-10 05:03:54.149159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.150 [2024-12-10 05:03:54.149183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.150 [2024-12-10 05:03:54.149191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.150 [2024-12-10 05:03:54.149366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.150 [2024-12-10 05:03:54.149541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.150 [2024-12-10 05:03:54.149555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.150 [2024-12-10 05:03:54.149562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.150 [2024-12-10 05:03:54.149569] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.150 [2024-12-10 05:03:54.161596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.150 [2024-12-10 05:03:54.161997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.150 [2024-12-10 05:03:54.162031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.150 [2024-12-10 05:03:54.162040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.150 [2024-12-10 05:03:54.162219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.150 [2024-12-10 05:03:54.162393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.150 [2024-12-10 05:03:54.162403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.150 [2024-12-10 05:03:54.162410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.150 [2024-12-10 05:03:54.162417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.150 [2024-12-10 05:03:54.174392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.150 [2024-12-10 05:03:54.174675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.150 [2024-12-10 05:03:54.174693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.150 [2024-12-10 05:03:54.174700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.150 [2024-12-10 05:03:54.174869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.150 [2024-12-10 05:03:54.175039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.150 [2024-12-10 05:03:54.175049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.150 [2024-12-10 05:03:54.175055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.150 [2024-12-10 05:03:54.175062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.150 [2024-12-10 05:03:54.187284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.150 [2024-12-10 05:03:54.187616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.150 [2024-12-10 05:03:54.187633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.150 [2024-12-10 05:03:54.187640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.150 [2024-12-10 05:03:54.187808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.151 [2024-12-10 05:03:54.187977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.151 [2024-12-10 05:03:54.187987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.151 [2024-12-10 05:03:54.187993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.151 [2024-12-10 05:03:54.188004] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.151 [2024-12-10 05:03:54.200140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.151 [2024-12-10 05:03:54.200448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.151 [2024-12-10 05:03:54.200466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.151 [2024-12-10 05:03:54.200473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.151 [2024-12-10 05:03:54.200633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.151 [2024-12-10 05:03:54.200792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.151 [2024-12-10 05:03:54.200802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.151 [2024-12-10 05:03:54.200808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.151 [2024-12-10 05:03:54.200815] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.151 [2024-12-10 05:03:54.213159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.151 [2024-12-10 05:03:54.213456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.151 [2024-12-10 05:03:54.213475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.151 [2024-12-10 05:03:54.213483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.151 [2024-12-10 05:03:54.213657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.151 [2024-12-10 05:03:54.213833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.151 [2024-12-10 05:03:54.213843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.151 [2024-12-10 05:03:54.213850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.151 [2024-12-10 05:03:54.213857] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.151 [2024-12-10 05:03:54.226079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.151 [2024-12-10 05:03:54.226500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.151 [2024-12-10 05:03:54.226518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.151 [2024-12-10 05:03:54.226526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.151 [2024-12-10 05:03:54.226686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.151 [2024-12-10 05:03:54.226848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.151 [2024-12-10 05:03:54.226857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.151 [2024-12-10 05:03:54.226863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.151 [2024-12-10 05:03:54.226869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.151 [2024-12-10 05:03:54.238930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.151 [2024-12-10 05:03:54.239279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.151 [2024-12-10 05:03:54.239297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.151 [2024-12-10 05:03:54.239304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.151 [2024-12-10 05:03:54.239464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.151 [2024-12-10 05:03:54.239624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.151 [2024-12-10 05:03:54.239633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.151 [2024-12-10 05:03:54.239639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.151 [2024-12-10 05:03:54.239645] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.151 [2024-12-10 05:03:54.251780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.151 [2024-12-10 05:03:54.252190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.151 [2024-12-10 05:03:54.252236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.151 [2024-12-10 05:03:54.252259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.151 [2024-12-10 05:03:54.252774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.151 [2024-12-10 05:03:54.253171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.151 [2024-12-10 05:03:54.253192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.151 [2024-12-10 05:03:54.253207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.151 [2024-12-10 05:03:54.253221] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.151 [2024-12-10 05:03:54.266657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.151 [2024-12-10 05:03:54.267085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.151 [2024-12-10 05:03:54.267108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.151 [2024-12-10 05:03:54.267119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.151 [2024-12-10 05:03:54.267381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.151 [2024-12-10 05:03:54.267640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.151 [2024-12-10 05:03:54.267654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.151 [2024-12-10 05:03:54.267664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.151 [2024-12-10 05:03:54.267674] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.151 [2024-12-10 05:03:54.279699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.151 [2024-12-10 05:03:54.280046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.151 [2024-12-10 05:03:54.280064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.151 [2024-12-10 05:03:54.280072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.151 [2024-12-10 05:03:54.280255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.151 [2024-12-10 05:03:54.280431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.151 [2024-12-10 05:03:54.280441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.151 [2024-12-10 05:03:54.280448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.151 [2024-12-10 05:03:54.280455] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.411 [2024-12-10 05:03:54.292705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.412 [2024-12-10 05:03:54.293154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.412 [2024-12-10 05:03:54.293211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.412 [2024-12-10 05:03:54.293235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.412 [2024-12-10 05:03:54.293656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.412 [2024-12-10 05:03:54.293827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.412 [2024-12-10 05:03:54.293837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.412 [2024-12-10 05:03:54.293843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.412 [2024-12-10 05:03:54.293850] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.412 [2024-12-10 05:03:54.305502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.412 [2024-12-10 05:03:54.305883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.412 [2024-12-10 05:03:54.305902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.412 [2024-12-10 05:03:54.305909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.412 [2024-12-10 05:03:54.306078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.412 [2024-12-10 05:03:54.306254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.412 [2024-12-10 05:03:54.306264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.412 [2024-12-10 05:03:54.306271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.412 [2024-12-10 05:03:54.306277] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.412 [2024-12-10 05:03:54.318423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.412 [2024-12-10 05:03:54.318795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.412 [2024-12-10 05:03:54.318813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.412 [2024-12-10 05:03:54.318820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.412 [2024-12-10 05:03:54.318981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.412 [2024-12-10 05:03:54.319141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.412 [2024-12-10 05:03:54.319153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.412 [2024-12-10 05:03:54.319160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.412 [2024-12-10 05:03:54.319170] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.412 [2024-12-10 05:03:54.331328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.412 [2024-12-10 05:03:54.331612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.412 [2024-12-10 05:03:54.331630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.412 [2024-12-10 05:03:54.331638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.412 [2024-12-10 05:03:54.331807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.412 [2024-12-10 05:03:54.331978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.412 [2024-12-10 05:03:54.331988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.412 [2024-12-10 05:03:54.331994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.412 [2024-12-10 05:03:54.332000] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.412 [2024-12-10 05:03:54.344207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.412 [2024-12-10 05:03:54.344537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.412 [2024-12-10 05:03:54.344554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.412 [2024-12-10 05:03:54.344561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.412 [2024-12-10 05:03:54.344721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.412 [2024-12-10 05:03:54.344881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.412 [2024-12-10 05:03:54.344892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.412 [2024-12-10 05:03:54.344898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.412 [2024-12-10 05:03:54.344904] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.412 [2024-12-10 05:03:54.357117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.412 [2024-12-10 05:03:54.357520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.412 [2024-12-10 05:03:54.357537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.412 [2024-12-10 05:03:54.357545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.412 [2024-12-10 05:03:54.357705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.412 [2024-12-10 05:03:54.357865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.412 [2024-12-10 05:03:54.357875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.412 [2024-12-10 05:03:54.357881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.412 [2024-12-10 05:03:54.357891] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.412 [2024-12-10 05:03:54.369985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.412 [2024-12-10 05:03:54.370294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.412 [2024-12-10 05:03:54.370312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.412 [2024-12-10 05:03:54.370320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.412 [2024-12-10 05:03:54.370488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.412 [2024-12-10 05:03:54.370658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.412 [2024-12-10 05:03:54.370668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.412 [2024-12-10 05:03:54.370674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.412 [2024-12-10 05:03:54.370681] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.412 [2024-12-10 05:03:54.382835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.412 [2024-12-10 05:03:54.383184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.412 [2024-12-10 05:03:54.383202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.412 [2024-12-10 05:03:54.383210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.412 [2024-12-10 05:03:54.383378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.412 [2024-12-10 05:03:54.383549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.412 [2024-12-10 05:03:54.383558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.412 [2024-12-10 05:03:54.383565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.412 [2024-12-10 05:03:54.383572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.412 [2024-12-10 05:03:54.395809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.412 [2024-12-10 05:03:54.396250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.412 [2024-12-10 05:03:54.396268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.412 [2024-12-10 05:03:54.396276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.412 [2024-12-10 05:03:54.396449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.412 [2024-12-10 05:03:54.396609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.412 [2024-12-10 05:03:54.396618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.412 [2024-12-10 05:03:54.396624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.412 [2024-12-10 05:03:54.396630] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.412 [2024-12-10 05:03:54.408733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.412 [2024-12-10 05:03:54.409151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.412 [2024-12-10 05:03:54.409174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.412 [2024-12-10 05:03:54.409182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.412 [2024-12-10 05:03:54.409342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.412 [2024-12-10 05:03:54.409503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.412 [2024-12-10 05:03:54.409512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.412 [2024-12-10 05:03:54.409518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.412 [2024-12-10 05:03:54.409524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.412 [2024-12-10 05:03:54.421664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.413 [2024-12-10 05:03:54.422077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.413 [2024-12-10 05:03:54.422094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.413 [2024-12-10 05:03:54.422101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.413 [2024-12-10 05:03:54.422265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.413 [2024-12-10 05:03:54.422427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.413 [2024-12-10 05:03:54.422437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.413 [2024-12-10 05:03:54.422443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.413 [2024-12-10 05:03:54.422449] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.413 [2024-12-10 05:03:54.434576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.413 [2024-12-10 05:03:54.434980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.413 [2024-12-10 05:03:54.434997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.413 [2024-12-10 05:03:54.435005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.413 [2024-12-10 05:03:54.435169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.413 [2024-12-10 05:03:54.435353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.413 [2024-12-10 05:03:54.435364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.413 [2024-12-10 05:03:54.435370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.413 [2024-12-10 05:03:54.435376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.413 [2024-12-10 05:03:54.447360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.413 [2024-12-10 05:03:54.447778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.413 [2024-12-10 05:03:54.447822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.413 [2024-12-10 05:03:54.447847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.413 [2024-12-10 05:03:54.448331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.413 [2024-12-10 05:03:54.448503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.413 [2024-12-10 05:03:54.448513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.413 [2024-12-10 05:03:54.448520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.413 [2024-12-10 05:03:54.448526] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.413 [2024-12-10 05:03:54.460218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.413 [2024-12-10 05:03:54.460621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.413 [2024-12-10 05:03:54.460638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.413 [2024-12-10 05:03:54.460645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.413 [2024-12-10 05:03:54.460805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.413 [2024-12-10 05:03:54.460966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.413 [2024-12-10 05:03:54.460976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.413 [2024-12-10 05:03:54.460982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.413 [2024-12-10 05:03:54.460988] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.413 [2024-12-10 05:03:54.473168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.413 [2024-12-10 05:03:54.473616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.413 [2024-12-10 05:03:54.473661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.413 [2024-12-10 05:03:54.473684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.413 [2024-12-10 05:03:54.474219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.413 [2024-12-10 05:03:54.474390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.413 [2024-12-10 05:03:54.474398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.413 [2024-12-10 05:03:54.474404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.413 [2024-12-10 05:03:54.474411] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.413 [2024-12-10 05:03:54.486050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.413 [2024-12-10 05:03:54.486478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.413 [2024-12-10 05:03:54.486496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.413 [2024-12-10 05:03:54.486504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.413 [2024-12-10 05:03:54.486673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.413 [2024-12-10 05:03:54.486842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.413 [2024-12-10 05:03:54.486855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.413 [2024-12-10 05:03:54.486861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.413 [2024-12-10 05:03:54.486868] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.413 [2024-12-10 05:03:54.498884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.413 [2024-12-10 05:03:54.499226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.413 [2024-12-10 05:03:54.499242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.413 [2024-12-10 05:03:54.499251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.413 [2024-12-10 05:03:54.499411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.413 [2024-12-10 05:03:54.499570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.413 [2024-12-10 05:03:54.499578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.413 [2024-12-10 05:03:54.499584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.413 [2024-12-10 05:03:54.499590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.413 [2024-12-10 05:03:54.511725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.413 [2024-12-10 05:03:54.512150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.413 [2024-12-10 05:03:54.512208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.413 [2024-12-10 05:03:54.512232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.413 [2024-12-10 05:03:54.512815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.413 [2024-12-10 05:03:54.513335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.413 [2024-12-10 05:03:54.513345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.413 [2024-12-10 05:03:54.513351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.413 [2024-12-10 05:03:54.513358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.413 [2024-12-10 05:03:54.524620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.413 [2024-12-10 05:03:54.525063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.413 [2024-12-10 05:03:54.525081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.413 [2024-12-10 05:03:54.525089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.413 [2024-12-10 05:03:54.525264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.413 [2024-12-10 05:03:54.525435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.413 [2024-12-10 05:03:54.525445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.413 [2024-12-10 05:03:54.525452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.413 [2024-12-10 05:03:54.525464] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.413 [2024-12-10 05:03:54.537663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.413 [2024-12-10 05:03:54.538083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.413 [2024-12-10 05:03:54.538102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.413 [2024-12-10 05:03:54.538109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.413 [2024-12-10 05:03:54.538291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.413 [2024-12-10 05:03:54.538465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.413 [2024-12-10 05:03:54.538475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.413 [2024-12-10 05:03:54.538482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.413 [2024-12-10 05:03:54.538488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.674 [2024-12-10 05:03:54.550681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.674 [2024-12-10 05:03:54.551092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.674 [2024-12-10 05:03:54.551109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.674 [2024-12-10 05:03:54.551117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.674 [2024-12-10 05:03:54.551304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.674 [2024-12-10 05:03:54.551474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.674 [2024-12-10 05:03:54.551483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.674 [2024-12-10 05:03:54.551490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.674 [2024-12-10 05:03:54.551496] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.674 [2024-12-10 05:03:54.563463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.674 [2024-12-10 05:03:54.563870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.674 [2024-12-10 05:03:54.563907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.674 [2024-12-10 05:03:54.563932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.674 [2024-12-10 05:03:54.564503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.674 [2024-12-10 05:03:54.564674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.674 [2024-12-10 05:03:54.564682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.674 [2024-12-10 05:03:54.564689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.674 [2024-12-10 05:03:54.564695] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.674 [2024-12-10 05:03:54.576310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.674 [2024-12-10 05:03:54.576655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.674 [2024-12-10 05:03:54.576674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.674 [2024-12-10 05:03:54.576681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.674 [2024-12-10 05:03:54.576840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.674 [2024-12-10 05:03:54.577001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.674 [2024-12-10 05:03:54.577011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.674 [2024-12-10 05:03:54.577017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.674 [2024-12-10 05:03:54.577023] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.674 [2024-12-10 05:03:54.589099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.674 [2024-12-10 05:03:54.589519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.674 [2024-12-10 05:03:54.589537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.674 [2024-12-10 05:03:54.589544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.674 [2024-12-10 05:03:54.589705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.674 [2024-12-10 05:03:54.589865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.674 [2024-12-10 05:03:54.589875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.674 [2024-12-10 05:03:54.589881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.674 [2024-12-10 05:03:54.589887] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.674 [2024-12-10 05:03:54.601868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.674 [2024-12-10 05:03:54.602274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.674 [2024-12-10 05:03:54.602293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.674 [2024-12-10 05:03:54.602300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.674 [2024-12-10 05:03:54.602460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.674 [2024-12-10 05:03:54.602621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.674 [2024-12-10 05:03:54.602631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.674 [2024-12-10 05:03:54.602637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.674 [2024-12-10 05:03:54.602643] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.674 [2024-12-10 05:03:54.614671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.674 [2024-12-10 05:03:54.615090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.674 [2024-12-10 05:03:54.615108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.674 [2024-12-10 05:03:54.615115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.674 [2024-12-10 05:03:54.615304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.674 [2024-12-10 05:03:54.615474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.674 [2024-12-10 05:03:54.615484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.674 [2024-12-10 05:03:54.615490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.674 [2024-12-10 05:03:54.615497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.674 [2024-12-10 05:03:54.627481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.674 [2024-12-10 05:03:54.627879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.674 [2024-12-10 05:03:54.627924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.674 [2024-12-10 05:03:54.627948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.674 [2024-12-10 05:03:54.628394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.674 [2024-12-10 05:03:54.628566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.674 [2024-12-10 05:03:54.628575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.674 [2024-12-10 05:03:54.628582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.674 [2024-12-10 05:03:54.628589] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.674 [2024-12-10 05:03:54.640272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.674 [2024-12-10 05:03:54.640610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.674 [2024-12-10 05:03:54.640627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.674 [2024-12-10 05:03:54.640634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.674 [2024-12-10 05:03:54.640794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.674 [2024-12-10 05:03:54.640955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.674 [2024-12-10 05:03:54.640964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.674 [2024-12-10 05:03:54.640971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.675 [2024-12-10 05:03:54.640977] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.675 [2024-12-10 05:03:54.653072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.675 [2024-12-10 05:03:54.653494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.675 [2024-12-10 05:03:54.653539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.675 [2024-12-10 05:03:54.653563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.675 [2024-12-10 05:03:54.654146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.675 [2024-12-10 05:03:54.654689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.675 [2024-12-10 05:03:54.654702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.675 [2024-12-10 05:03:54.654709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.675 [2024-12-10 05:03:54.654716] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.675 [2024-12-10 05:03:54.665945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.675 [2024-12-10 05:03:54.666355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.675 [2024-12-10 05:03:54.666373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.675 [2024-12-10 05:03:54.666381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.675 [2024-12-10 05:03:54.666540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.675 [2024-12-10 05:03:54.666702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.675 [2024-12-10 05:03:54.666711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.675 [2024-12-10 05:03:54.666717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.675 [2024-12-10 05:03:54.666723] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.675 [2024-12-10 05:03:54.678678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.675 [2024-12-10 05:03:54.679092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.675 [2024-12-10 05:03:54.679108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.675 [2024-12-10 05:03:54.679115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.675 [2024-12-10 05:03:54.679302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.675 [2024-12-10 05:03:54.679479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.675 [2024-12-10 05:03:54.679489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.675 [2024-12-10 05:03:54.679495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.675 [2024-12-10 05:03:54.679501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.675 [2024-12-10 05:03:54.691521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.675 [2024-12-10 05:03:54.691950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.675 [2024-12-10 05:03:54.691994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.675 [2024-12-10 05:03:54.692018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.675 [2024-12-10 05:03:54.692575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.675 [2024-12-10 05:03:54.692745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.675 [2024-12-10 05:03:54.692755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.675 [2024-12-10 05:03:54.692763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.675 [2024-12-10 05:03:54.692774] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.675 [2024-12-10 05:03:54.704354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.675 [2024-12-10 05:03:54.704772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.675 [2024-12-10 05:03:54.704789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.675 [2024-12-10 05:03:54.704796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.675 [2024-12-10 05:03:54.704956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.675 [2024-12-10 05:03:54.705117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.675 [2024-12-10 05:03:54.705126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.675 [2024-12-10 05:03:54.705132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.675 [2024-12-10 05:03:54.705138] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.675 [2024-12-10 05:03:54.717204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.675 [2024-12-10 05:03:54.717554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.675 [2024-12-10 05:03:54.717572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.675 [2024-12-10 05:03:54.717579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.675 [2024-12-10 05:03:54.717739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.675 [2024-12-10 05:03:54.717900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.675 [2024-12-10 05:03:54.717909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.675 [2024-12-10 05:03:54.717915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.675 [2024-12-10 05:03:54.717922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.675 [2024-12-10 05:03:54.730015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.675 [2024-12-10 05:03:54.730379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.675 [2024-12-10 05:03:54.730397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.675 [2024-12-10 05:03:54.730406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.675 [2024-12-10 05:03:54.730577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.675 [2024-12-10 05:03:54.730746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.675 [2024-12-10 05:03:54.730756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.675 [2024-12-10 05:03:54.730763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.675 [2024-12-10 05:03:54.730769] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.675 [2024-12-10 05:03:54.742818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.675 [2024-12-10 05:03:54.743242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.675 [2024-12-10 05:03:54.743297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.675 [2024-12-10 05:03:54.743322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.675 [2024-12-10 05:03:54.743726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.675 [2024-12-10 05:03:54.743896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.675 [2024-12-10 05:03:54.743906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.675 [2024-12-10 05:03:54.743912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.675 [2024-12-10 05:03:54.743919] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.675 [2024-12-10 05:03:54.755591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.675 [2024-12-10 05:03:54.755937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.675 [2024-12-10 05:03:54.755974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.675 [2024-12-10 05:03:54.756001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.675 [2024-12-10 05:03:54.756598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.675 [2024-12-10 05:03:54.757133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.675 [2024-12-10 05:03:54.757142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.675 [2024-12-10 05:03:54.757149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.675 [2024-12-10 05:03:54.757155] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.675 [2024-12-10 05:03:54.768581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.675 [2024-12-10 05:03:54.768984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.675 [2024-12-10 05:03:54.769001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.675 [2024-12-10 05:03:54.769008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.675 [2024-12-10 05:03:54.769184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.675 [2024-12-10 05:03:54.769354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.675 [2024-12-10 05:03:54.769363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.675 [2024-12-10 05:03:54.769370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.675 [2024-12-10 05:03:54.769376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.675 [2024-12-10 05:03:54.781380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.676 [2024-12-10 05:03:54.781811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.676 [2024-12-10 05:03:54.781829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.676 [2024-12-10 05:03:54.781836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.676 [2024-12-10 05:03:54.782009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.676 [2024-12-10 05:03:54.782187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.676 [2024-12-10 05:03:54.782197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.676 [2024-12-10 05:03:54.782204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.676 [2024-12-10 05:03:54.782211] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.676 [2024-12-10 05:03:54.794459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.676 [2024-12-10 05:03:54.794730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.676 [2024-12-10 05:03:54.794748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.676 [2024-12-10 05:03:54.794757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.676 [2024-12-10 05:03:54.794932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.676 [2024-12-10 05:03:54.795114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.676 [2024-12-10 05:03:54.795124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.676 [2024-12-10 05:03:54.795130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.676 [2024-12-10 05:03:54.795137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.936 [2024-12-10 05:03:54.807577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.936 [2024-12-10 05:03:54.807994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.936 [2024-12-10 05:03:54.808011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.936 [2024-12-10 05:03:54.808018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.936 [2024-12-10 05:03:54.808184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.936 [2024-12-10 05:03:54.808368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.936 [2024-12-10 05:03:54.808378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.936 [2024-12-10 05:03:54.808385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.936 [2024-12-10 05:03:54.808391] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.936 [2024-12-10 05:03:54.820680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.936 [2024-12-10 05:03:54.821077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.936 [2024-12-10 05:03:54.821094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.936 [2024-12-10 05:03:54.821102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.936 [2024-12-10 05:03:54.821288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.936 [2024-12-10 05:03:54.821459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.936 [2024-12-10 05:03:54.821471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.936 [2024-12-10 05:03:54.821478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.936 [2024-12-10 05:03:54.821485] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.936 [2024-12-10 05:03:54.833490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.936 [2024-12-10 05:03:54.833910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.936 [2024-12-10 05:03:54.833955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.936 [2024-12-10 05:03:54.833979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.936 [2024-12-10 05:03:54.834372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.936 [2024-12-10 05:03:54.834534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.936 [2024-12-10 05:03:54.834543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.936 [2024-12-10 05:03:54.834549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.936 [2024-12-10 05:03:54.834556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.936 [2024-12-10 05:03:54.846243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.936 [2024-12-10 05:03:54.846634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.936 [2024-12-10 05:03:54.846651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.936 [2024-12-10 05:03:54.846658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.936 [2024-12-10 05:03:54.846818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.936 [2024-12-10 05:03:54.846978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.936 [2024-12-10 05:03:54.846987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.936 [2024-12-10 05:03:54.846993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.936 [2024-12-10 05:03:54.846999] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.936 [2024-12-10 05:03:54.859130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.936 [2024-12-10 05:03:54.859500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.936 [2024-12-10 05:03:54.859518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.936 [2024-12-10 05:03:54.859525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.936 [2024-12-10 05:03:54.859694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.936 [2024-12-10 05:03:54.859865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.936 [2024-12-10 05:03:54.859875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.936 [2024-12-10 05:03:54.859881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.936 [2024-12-10 05:03:54.859892] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.936 [2024-12-10 05:03:54.872091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.936 [2024-12-10 05:03:54.872496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.936 [2024-12-10 05:03:54.872513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.936 [2024-12-10 05:03:54.872521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.936 [2024-12-10 05:03:54.872681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.936 [2024-12-10 05:03:54.872842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.936 [2024-12-10 05:03:54.872851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.936 [2024-12-10 05:03:54.872857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.936 [2024-12-10 05:03:54.872864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.936 [2024-12-10 05:03:54.884933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.936 [2024-12-10 05:03:54.885350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.936 [2024-12-10 05:03:54.885400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.936 [2024-12-10 05:03:54.885425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.936 [2024-12-10 05:03:54.885960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.936 [2024-12-10 05:03:54.886121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.936 [2024-12-10 05:03:54.886131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.937 [2024-12-10 05:03:54.886137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.937 [2024-12-10 05:03:54.886143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.937 [2024-12-10 05:03:54.897706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.937 [2024-12-10 05:03:54.898136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.937 [2024-12-10 05:03:54.898191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.937 [2024-12-10 05:03:54.898216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.937 [2024-12-10 05:03:54.898799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.937 [2024-12-10 05:03:54.899384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.937 [2024-12-10 05:03:54.899403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.937 [2024-12-10 05:03:54.899418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.937 [2024-12-10 05:03:54.899432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.937 [2024-12-10 05:03:54.912788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.937 [2024-12-10 05:03:54.913292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.937 [2024-12-10 05:03:54.913320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.937 [2024-12-10 05:03:54.913331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.937 [2024-12-10 05:03:54.913587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.937 [2024-12-10 05:03:54.913844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.937 [2024-12-10 05:03:54.913856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.937 [2024-12-10 05:03:54.913866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.937 [2024-12-10 05:03:54.913876] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.937 [2024-12-10 05:03:54.925794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.937 [2024-12-10 05:03:54.926225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.937 [2024-12-10 05:03:54.926242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.937 [2024-12-10 05:03:54.926250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.937 7331.25 IOPS, 28.64 MiB/s [2024-12-10T04:03:55.074Z] [2024-12-10 05:03:54.927641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.937 [2024-12-10 05:03:54.927811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.937 [2024-12-10 05:03:54.927820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.937 [2024-12-10 05:03:54.927826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.937 [2024-12-10 05:03:54.927832] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.937 [2024-12-10 05:03:54.938621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.937 [2024-12-10 05:03:54.938981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.937 [2024-12-10 05:03:54.939027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.937 [2024-12-10 05:03:54.939051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.937 [2024-12-10 05:03:54.939541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.937 [2024-12-10 05:03:54.939712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.937 [2024-12-10 05:03:54.939722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.937 [2024-12-10 05:03:54.939728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.937 [2024-12-10 05:03:54.939735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.937 [2024-12-10 05:03:54.951458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.937 [2024-12-10 05:03:54.951872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.937 [2024-12-10 05:03:54.951889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.937 [2024-12-10 05:03:54.951897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.937 [2024-12-10 05:03:54.952061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.937 [2024-12-10 05:03:54.952243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.937 [2024-12-10 05:03:54.952253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.937 [2024-12-10 05:03:54.952259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.937 [2024-12-10 05:03:54.952266] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.937 [2024-12-10 05:03:54.964209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.937 [2024-12-10 05:03:54.964622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.937 [2024-12-10 05:03:54.964639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.937 [2024-12-10 05:03:54.964646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.937 [2024-12-10 05:03:54.964807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.937 [2024-12-10 05:03:54.964969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.937 [2024-12-10 05:03:54.964978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.937 [2024-12-10 05:03:54.964984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.937 [2024-12-10 05:03:54.964990] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.937 [2024-12-10 05:03:54.977071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.937 [2024-12-10 05:03:54.977495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.937 [2024-12-10 05:03:54.977513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.937 [2024-12-10 05:03:54.977521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.937 [2024-12-10 05:03:54.977690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.937 [2024-12-10 05:03:54.977859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.937 [2024-12-10 05:03:54.977869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.937 [2024-12-10 05:03:54.977875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.937 [2024-12-10 05:03:54.977882] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.937 [2024-12-10 05:03:54.989937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.937 [2024-12-10 05:03:54.990360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.937 [2024-12-10 05:03:54.990406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.937 [2024-12-10 05:03:54.990430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.937 [2024-12-10 05:03:54.991013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.937 [2024-12-10 05:03:54.991606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.937 [2024-12-10 05:03:54.991619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.937 [2024-12-10 05:03:54.991626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.937 [2024-12-10 05:03:54.991632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.937 [2024-12-10 05:03:55.002848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.937 [2024-12-10 05:03:55.003210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.937 [2024-12-10 05:03:55.003255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.937 [2024-12-10 05:03:55.003279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.937 [2024-12-10 05:03:55.003715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.937 [2024-12-10 05:03:55.003879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.937 [2024-12-10 05:03:55.003889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.937 [2024-12-10 05:03:55.003895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.937 [2024-12-10 05:03:55.003902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.937 [2024-12-10 05:03:55.015605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.937 [2024-12-10 05:03:55.016022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.937 [2024-12-10 05:03:55.016039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.937 [2024-12-10 05:03:55.016046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.937 [2024-12-10 05:03:55.016211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.937 [2024-12-10 05:03:55.016397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.937 [2024-12-10 05:03:55.016406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.937 [2024-12-10 05:03:55.016413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.937 [2024-12-10 05:03:55.016420] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.938 [2024-12-10 05:03:55.028575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:03.938 [2024-12-10 05:03:55.028905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.938 [2024-12-10 05:03:55.028923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:03.938 [2024-12-10 05:03:55.028930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:03.938 [2024-12-10 05:03:55.029099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:03.938 [2024-12-10 05:03:55.029274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:03.938 [2024-12-10 05:03:55.029284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:03.938 [2024-12-10 05:03:55.029291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:03.938 [2024-12-10 05:03:55.029303] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:03.938 [2024-12-10 05:03:55.041412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.938 [2024-12-10 05:03:55.041856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.938 [2024-12-10 05:03:55.041874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.938 [2024-12-10 05:03:55.041882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.938 [2024-12-10 05:03:55.042051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.938 [2024-12-10 05:03:55.042226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.938 [2024-12-10 05:03:55.042237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.938 [2024-12-10 05:03:55.042244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.938 [2024-12-10 05:03:55.042252] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.938 [2024-12-10 05:03:55.054477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.938 [2024-12-10 05:03:55.054910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.938 [2024-12-10 05:03:55.054928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:03.938 [2024-12-10 05:03:55.054936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:03.938 [2024-12-10 05:03:55.055119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:03.938 [2024-12-10 05:03:55.055294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.938 [2024-12-10 05:03:55.055304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.938 [2024-12-10 05:03:55.055311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.938 [2024-12-10 05:03:55.055317] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.197 [2024-12-10 05:03:55.067594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.197 [2024-12-10 05:03:55.068024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.197 [2024-12-10 05:03:55.068070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.197 [2024-12-10 05:03:55.068094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.197 [2024-12-10 05:03:55.068647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.197 [2024-12-10 05:03:55.068823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.197 [2024-12-10 05:03:55.068833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.197 [2024-12-10 05:03:55.068840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.197 [2024-12-10 05:03:55.068847] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.197 [2024-12-10 05:03:55.080525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.197 [2024-12-10 05:03:55.080942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.197 [2024-12-10 05:03:55.080958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.197 [2024-12-10 05:03:55.080966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.197 [2024-12-10 05:03:55.081125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.197 [2024-12-10 05:03:55.081311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.197 [2024-12-10 05:03:55.081321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.197 [2024-12-10 05:03:55.081328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.197 [2024-12-10 05:03:55.081334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.197 [2024-12-10 05:03:55.093389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.197 [2024-12-10 05:03:55.093733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.197 [2024-12-10 05:03:55.093750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.197 [2024-12-10 05:03:55.093757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.197 [2024-12-10 05:03:55.093917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.197 [2024-12-10 05:03:55.094077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.197 [2024-12-10 05:03:55.094086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.197 [2024-12-10 05:03:55.094092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.197 [2024-12-10 05:03:55.094099] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.197 [2024-12-10 05:03:55.106300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.197 [2024-12-10 05:03:55.106732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.197 [2024-12-10 05:03:55.106748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.197 [2024-12-10 05:03:55.106756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.197 [2024-12-10 05:03:55.106916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.197 [2024-12-10 05:03:55.107076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.197 [2024-12-10 05:03:55.107085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.197 [2024-12-10 05:03:55.107092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.197 [2024-12-10 05:03:55.107099] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.198 [2024-12-10 05:03:55.119263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.198 [2024-12-10 05:03:55.119533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.198 [2024-12-10 05:03:55.119550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.198 [2024-12-10 05:03:55.119557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.198 [2024-12-10 05:03:55.119721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.198 [2024-12-10 05:03:55.119881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.198 [2024-12-10 05:03:55.119891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.198 [2024-12-10 05:03:55.119897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.198 [2024-12-10 05:03:55.119903] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.198 [2024-12-10 05:03:55.132185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.198 [2024-12-10 05:03:55.132475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.198 [2024-12-10 05:03:55.132491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.198 [2024-12-10 05:03:55.132499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.198 [2024-12-10 05:03:55.132660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.198 [2024-12-10 05:03:55.132820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.198 [2024-12-10 05:03:55.132829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.198 [2024-12-10 05:03:55.132835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.198 [2024-12-10 05:03:55.132842] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.198 [2024-12-10 05:03:55.145111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.198 [2024-12-10 05:03:55.145461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.198 [2024-12-10 05:03:55.145517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.198 [2024-12-10 05:03:55.145540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.198 [2024-12-10 05:03:55.146073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.198 [2024-12-10 05:03:55.146257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.198 [2024-12-10 05:03:55.146267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.198 [2024-12-10 05:03:55.146274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.198 [2024-12-10 05:03:55.146280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.198 [2024-12-10 05:03:55.157934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.198 [2024-12-10 05:03:55.158329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.198 [2024-12-10 05:03:55.158347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.198 [2024-12-10 05:03:55.158355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.198 [2024-12-10 05:03:55.158926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.198 [2024-12-10 05:03:55.159097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.198 [2024-12-10 05:03:55.159110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.198 [2024-12-10 05:03:55.159117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.198 [2024-12-10 05:03:55.159123] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.198 [2024-12-10 05:03:55.170721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.198 [2024-12-10 05:03:55.171114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.198 [2024-12-10 05:03:55.171131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.198 [2024-12-10 05:03:55.171139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.198 [2024-12-10 05:03:55.171326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.198 [2024-12-10 05:03:55.171496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.198 [2024-12-10 05:03:55.171506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.198 [2024-12-10 05:03:55.171512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.198 [2024-12-10 05:03:55.171519] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.198 [2024-12-10 05:03:55.183565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.198 [2024-12-10 05:03:55.183980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.198 [2024-12-10 05:03:55.183997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.198 [2024-12-10 05:03:55.184005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.198 [2024-12-10 05:03:55.184164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.198 [2024-12-10 05:03:55.184373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.198 [2024-12-10 05:03:55.184383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.198 [2024-12-10 05:03:55.184390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.198 [2024-12-10 05:03:55.184397] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.198 [2024-12-10 05:03:55.196369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.198 [2024-12-10 05:03:55.196782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.198 [2024-12-10 05:03:55.196799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.198 [2024-12-10 05:03:55.196807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.198 [2024-12-10 05:03:55.196967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.198 [2024-12-10 05:03:55.197127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.198 [2024-12-10 05:03:55.197136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.198 [2024-12-10 05:03:55.197142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.198 [2024-12-10 05:03:55.197152] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.198 [2024-12-10 05:03:55.209170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.198 [2024-12-10 05:03:55.209589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.198 [2024-12-10 05:03:55.209633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.198 [2024-12-10 05:03:55.209657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.198 [2024-12-10 05:03:55.210125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.198 [2024-12-10 05:03:55.210313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.198 [2024-12-10 05:03:55.210323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.198 [2024-12-10 05:03:55.210330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.198 [2024-12-10 05:03:55.210336] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.198 [2024-12-10 05:03:55.221935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.198 [2024-12-10 05:03:55.222349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.198 [2024-12-10 05:03:55.222366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.198 [2024-12-10 05:03:55.222374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.198 [2024-12-10 05:03:55.222534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.198 [2024-12-10 05:03:55.222694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.198 [2024-12-10 05:03:55.222703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.198 [2024-12-10 05:03:55.222709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.198 [2024-12-10 05:03:55.222716] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.198 [2024-12-10 05:03:55.234787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.198 [2024-12-10 05:03:55.235207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.198 [2024-12-10 05:03:55.235225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.198 [2024-12-10 05:03:55.235233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.198 [2024-12-10 05:03:55.235392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.198 [2024-12-10 05:03:55.235553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.198 [2024-12-10 05:03:55.235562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.198 [2024-12-10 05:03:55.235568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.198 [2024-12-10 05:03:55.235574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.198 [2024-12-10 05:03:55.247668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.198 [2024-12-10 05:03:55.248002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.199 [2024-12-10 05:03:55.248019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.199 [2024-12-10 05:03:55.248028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.199 [2024-12-10 05:03:55.248194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.199 [2024-12-10 05:03:55.248378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.199 [2024-12-10 05:03:55.248388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.199 [2024-12-10 05:03:55.248394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.199 [2024-12-10 05:03:55.248401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.199 [2024-12-10 05:03:55.260527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.199 [2024-12-10 05:03:55.260930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.199 [2024-12-10 05:03:55.260974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.199 [2024-12-10 05:03:55.260997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.199 [2024-12-10 05:03:55.261415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.199 [2024-12-10 05:03:55.261577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.199 [2024-12-10 05:03:55.261586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.199 [2024-12-10 05:03:55.261592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.199 [2024-12-10 05:03:55.261599] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.199 [2024-12-10 05:03:55.273370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.199 [2024-12-10 05:03:55.273776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.199 [2024-12-10 05:03:55.273794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.199 [2024-12-10 05:03:55.273801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.199 [2024-12-10 05:03:55.273960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.199 [2024-12-10 05:03:55.274121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.199 [2024-12-10 05:03:55.274130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.199 [2024-12-10 05:03:55.274136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.199 [2024-12-10 05:03:55.274143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.199 [2024-12-10 05:03:55.286269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.199 [2024-12-10 05:03:55.286693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.199 [2024-12-10 05:03:55.286737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.199 [2024-12-10 05:03:55.286760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.199 [2024-12-10 05:03:55.287293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.199 [2024-12-10 05:03:55.287464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.199 [2024-12-10 05:03:55.287474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.199 [2024-12-10 05:03:55.287480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.199 [2024-12-10 05:03:55.287487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.199 [2024-12-10 05:03:55.299065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.199 [2024-12-10 05:03:55.299417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.199 [2024-12-10 05:03:55.299434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.199 [2024-12-10 05:03:55.299442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.199 [2024-12-10 05:03:55.299601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.199 [2024-12-10 05:03:55.299763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.199 [2024-12-10 05:03:55.299773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.199 [2024-12-10 05:03:55.299780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.199 [2024-12-10 05:03:55.299786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.199 [2024-12-10 05:03:55.312138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.199 [2024-12-10 05:03:55.312503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.199 [2024-12-10 05:03:55.312522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.199 [2024-12-10 05:03:55.312530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.199 [2024-12-10 05:03:55.312704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.199 [2024-12-10 05:03:55.312881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.199 [2024-12-10 05:03:55.312891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.199 [2024-12-10 05:03:55.312898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.199 [2024-12-10 05:03:55.312904] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.199 [2024-12-10 05:03:55.325030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.199 [2024-12-10 05:03:55.325463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.199 [2024-12-10 05:03:55.325481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.199 [2024-12-10 05:03:55.325489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.199 [2024-12-10 05:03:55.325662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.199 [2024-12-10 05:03:55.325837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.199 [2024-12-10 05:03:55.325849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.199 [2024-12-10 05:03:55.325856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.199 [2024-12-10 05:03:55.325863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.459 [2024-12-10 05:03:55.337844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.459 [2024-12-10 05:03:55.338269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.459 [2024-12-10 05:03:55.338315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.459 [2024-12-10 05:03:55.338339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.459 [2024-12-10 05:03:55.338924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.459 [2024-12-10 05:03:55.339218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.459 [2024-12-10 05:03:55.339228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.459 [2024-12-10 05:03:55.339235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.459 [2024-12-10 05:03:55.339241] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.459 [2024-12-10 05:03:55.350696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.459 [2024-12-10 05:03:55.351113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.459 [2024-12-10 05:03:55.351130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.459 [2024-12-10 05:03:55.351137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.459 [2024-12-10 05:03:55.351322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.459 [2024-12-10 05:03:55.351492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.459 [2024-12-10 05:03:55.351502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.459 [2024-12-10 05:03:55.351509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.459 [2024-12-10 05:03:55.351515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.459 [2024-12-10 05:03:55.363429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.459 [2024-12-10 05:03:55.363747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.459 [2024-12-10 05:03:55.363764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.459 [2024-12-10 05:03:55.363772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.459 [2024-12-10 05:03:55.363932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.459 [2024-12-10 05:03:55.364091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.459 [2024-12-10 05:03:55.364100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.459 [2024-12-10 05:03:55.364106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.459 [2024-12-10 05:03:55.364116] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.459 [2024-12-10 05:03:55.376234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.459 [2024-12-10 05:03:55.376645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.459 [2024-12-10 05:03:55.376662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.459 [2024-12-10 05:03:55.376669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.459 [2024-12-10 05:03:55.376828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.459 [2024-12-10 05:03:55.376989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.459 [2024-12-10 05:03:55.376998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.459 [2024-12-10 05:03:55.377004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.459 [2024-12-10 05:03:55.377010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.459 [2024-12-10 05:03:55.389106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.459 [2024-12-10 05:03:55.389530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.459 [2024-12-10 05:03:55.389575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.459 [2024-12-10 05:03:55.389598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.459 [2024-12-10 05:03:55.390194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.459 [2024-12-10 05:03:55.390669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.459 [2024-12-10 05:03:55.390678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.459 [2024-12-10 05:03:55.390684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.459 [2024-12-10 05:03:55.390691] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.459 [2024-12-10 05:03:55.402009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.459 [2024-12-10 05:03:55.402323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.459 [2024-12-10 05:03:55.402341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.459 [2024-12-10 05:03:55.402349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.459 [2024-12-10 05:03:55.402508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.459 [2024-12-10 05:03:55.402668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.459 [2024-12-10 05:03:55.402678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.460 [2024-12-10 05:03:55.402685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.460 [2024-12-10 05:03:55.402691] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.460 [2024-12-10 05:03:55.414872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.460 [2024-12-10 05:03:55.415222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.460 [2024-12-10 05:03:55.415239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.460 [2024-12-10 05:03:55.415246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.460 [2024-12-10 05:03:55.415406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.460 [2024-12-10 05:03:55.415566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.460 [2024-12-10 05:03:55.415575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.460 [2024-12-10 05:03:55.415581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.460 [2024-12-10 05:03:55.415587] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.460 [2024-12-10 05:03:55.427757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.460 [2024-12-10 05:03:55.428097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.460 [2024-12-10 05:03:55.428115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.460 [2024-12-10 05:03:55.428122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.460 [2024-12-10 05:03:55.428287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.460 [2024-12-10 05:03:55.428448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.460 [2024-12-10 05:03:55.428458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.460 [2024-12-10 05:03:55.428464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.460 [2024-12-10 05:03:55.428470] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.460 [2024-12-10 05:03:55.440811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.460 [2024-12-10 05:03:55.441224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.460 [2024-12-10 05:03:55.441271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.460 [2024-12-10 05:03:55.441295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.460 [2024-12-10 05:03:55.441561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.460 [2024-12-10 05:03:55.441735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.460 [2024-12-10 05:03:55.441745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.460 [2024-12-10 05:03:55.441752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.460 [2024-12-10 05:03:55.441759] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.460 [2024-12-10 05:03:55.453691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.460 [2024-12-10 05:03:55.454036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.460 [2024-12-10 05:03:55.454054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.460 [2024-12-10 05:03:55.454061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.460 [2024-12-10 05:03:55.454246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.460 [2024-12-10 05:03:55.454416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.460 [2024-12-10 05:03:55.454426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.460 [2024-12-10 05:03:55.454432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.460 [2024-12-10 05:03:55.454439] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.460 [2024-12-10 05:03:55.466626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.460 [2024-12-10 05:03:55.466895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.460 [2024-12-10 05:03:55.466912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.460 [2024-12-10 05:03:55.466919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.460 [2024-12-10 05:03:55.467079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.460 [2024-12-10 05:03:55.467244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.460 [2024-12-10 05:03:55.467255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.460 [2024-12-10 05:03:55.467261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.460 [2024-12-10 05:03:55.467267] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.460 [2024-12-10 05:03:55.479476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.460 [2024-12-10 05:03:55.479761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.460 [2024-12-10 05:03:55.479778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.460 [2024-12-10 05:03:55.479785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.460 [2024-12-10 05:03:55.479944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.460 [2024-12-10 05:03:55.480105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.460 [2024-12-10 05:03:55.480114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.460 [2024-12-10 05:03:55.480120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.460 [2024-12-10 05:03:55.480126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.460 [2024-12-10 05:03:55.492275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.460 [2024-12-10 05:03:55.492682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.460 [2024-12-10 05:03:55.492699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.460 [2024-12-10 05:03:55.492707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.460 [2024-12-10 05:03:55.492876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.460 [2024-12-10 05:03:55.493045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.460 [2024-12-10 05:03:55.493057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.460 [2024-12-10 05:03:55.493064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.460 [2024-12-10 05:03:55.493070] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.460 [2024-12-10 05:03:55.505084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.460 [2024-12-10 05:03:55.505437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.461 [2024-12-10 05:03:55.505480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.461 [2024-12-10 05:03:55.505503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.461 [2024-12-10 05:03:55.505968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.461 [2024-12-10 05:03:55.506130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.461 [2024-12-10 05:03:55.506140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.461 [2024-12-10 05:03:55.506146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.461 [2024-12-10 05:03:55.506152] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.461 [2024-12-10 05:03:55.518064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.461 [2024-12-10 05:03:55.518389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.461 [2024-12-10 05:03:55.518407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.461 [2024-12-10 05:03:55.518415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.461 [2024-12-10 05:03:55.518576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.461 [2024-12-10 05:03:55.518744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.461 [2024-12-10 05:03:55.518754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.461 [2024-12-10 05:03:55.518760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.461 [2024-12-10 05:03:55.518767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.461 [2024-12-10 05:03:55.530988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.461 [2024-12-10 05:03:55.531352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.461 [2024-12-10 05:03:55.531370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.461 [2024-12-10 05:03:55.531379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.461 [2024-12-10 05:03:55.531551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.461 [2024-12-10 05:03:55.531714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.461 [2024-12-10 05:03:55.531724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.461 [2024-12-10 05:03:55.531730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.461 [2024-12-10 05:03:55.531740] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.461 [2024-12-10 05:03:55.543803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.461 [2024-12-10 05:03:55.544221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.461 [2024-12-10 05:03:55.544240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.461 [2024-12-10 05:03:55.544247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.461 [2024-12-10 05:03:55.544423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.461 [2024-12-10 05:03:55.544582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.461 [2024-12-10 05:03:55.544591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.461 [2024-12-10 05:03:55.544598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.461 [2024-12-10 05:03:55.544604] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.461 [2024-12-10 05:03:55.556705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.461 [2024-12-10 05:03:55.557090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.461 [2024-12-10 05:03:55.557134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.461 [2024-12-10 05:03:55.557159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.461 [2024-12-10 05:03:55.557760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.461 [2024-12-10 05:03:55.558157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.461 [2024-12-10 05:03:55.558172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.461 [2024-12-10 05:03:55.558179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.461 [2024-12-10 05:03:55.558185] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.461 [2024-12-10 05:03:55.569837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.461 [2024-12-10 05:03:55.570120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.461 [2024-12-10 05:03:55.570138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.461 [2024-12-10 05:03:55.570147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.461 [2024-12-10 05:03:55.570327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.461 [2024-12-10 05:03:55.570502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.461 [2024-12-10 05:03:55.570512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.461 [2024-12-10 05:03:55.570518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.461 [2024-12-10 05:03:55.570524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.461 [2024-12-10 05:03:55.582973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.461 [2024-12-10 05:03:55.583342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.461 [2024-12-10 05:03:55.583360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.461 [2024-12-10 05:03:55.583368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.461 [2024-12-10 05:03:55.583542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.461 [2024-12-10 05:03:55.583715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.461 [2024-12-10 05:03:55.583726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.461 [2024-12-10 05:03:55.583732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.461 [2024-12-10 05:03:55.583739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.722 [2024-12-10 05:03:55.596002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.722 [2024-12-10 05:03:55.596365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.722 [2024-12-10 05:03:55.596383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.722 [2024-12-10 05:03:55.596391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.722 [2024-12-10 05:03:55.596565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.722 [2024-12-10 05:03:55.596740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.722 [2024-12-10 05:03:55.596750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.722 [2024-12-10 05:03:55.596756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.722 [2024-12-10 05:03:55.596764] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.722 [2024-12-10 05:03:55.609016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.722 [2024-12-10 05:03:55.609302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.722 [2024-12-10 05:03:55.609321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.722 [2024-12-10 05:03:55.609328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.722 [2024-12-10 05:03:55.609503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.722 [2024-12-10 05:03:55.609679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.722 [2024-12-10 05:03:55.609689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.722 [2024-12-10 05:03:55.609696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.722 [2024-12-10 05:03:55.609702] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.722 [2024-12-10 05:03:55.621985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:04.722 [2024-12-10 05:03:55.622396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.722 [2024-12-10 05:03:55.622416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:04.722 [2024-12-10 05:03:55.622424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:04.722 [2024-12-10 05:03:55.622597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:04.722 [2024-12-10 05:03:55.622767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:04.722 [2024-12-10 05:03:55.622776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:04.722 [2024-12-10 05:03:55.622783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:04.722 [2024-12-10 05:03:55.622789] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:04.722 [2024-12-10 05:03:55.634863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.722 [2024-12-10 05:03:55.635217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.722 [2024-12-10 05:03:55.635236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.722 [2024-12-10 05:03:55.635243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.722 [2024-12-10 05:03:55.635412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.722 [2024-12-10 05:03:55.635581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.722 [2024-12-10 05:03:55.635590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.722 [2024-12-10 05:03:55.635597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.722 [2024-12-10 05:03:55.635603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.722 [2024-12-10 05:03:55.647726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.722 [2024-12-10 05:03:55.648081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.722 [2024-12-10 05:03:55.648099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.722 [2024-12-10 05:03:55.648106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.722 [2024-12-10 05:03:55.648280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.722 [2024-12-10 05:03:55.648450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.722 [2024-12-10 05:03:55.648460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.722 [2024-12-10 05:03:55.648466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.722 [2024-12-10 05:03:55.648473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.722 [2024-12-10 05:03:55.660608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.722 [2024-12-10 05:03:55.660992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.722 [2024-12-10 05:03:55.661036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.722 [2024-12-10 05:03:55.661060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.722 [2024-12-10 05:03:55.661659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.722 [2024-12-10 05:03:55.662206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.722 [2024-12-10 05:03:55.662220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.722 [2024-12-10 05:03:55.662227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.722 [2024-12-10 05:03:55.662233] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.722 [2024-12-10 05:03:55.673490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.722 [2024-12-10 05:03:55.673903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.722 [2024-12-10 05:03:55.673920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.722 [2024-12-10 05:03:55.673927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.722 [2024-12-10 05:03:55.674087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.722 [2024-12-10 05:03:55.674252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.722 [2024-12-10 05:03:55.674262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.722 [2024-12-10 05:03:55.674268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.722 [2024-12-10 05:03:55.674274] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.722 [2024-12-10 05:03:55.686509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.722 [2024-12-10 05:03:55.686855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.723 [2024-12-10 05:03:55.686872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.723 [2024-12-10 05:03:55.686880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.723 [2024-12-10 05:03:55.687039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.723 [2024-12-10 05:03:55.687233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.723 [2024-12-10 05:03:55.687244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.723 [2024-12-10 05:03:55.687250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.723 [2024-12-10 05:03:55.687256] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.723 [2024-12-10 05:03:55.699413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.723 [2024-12-10 05:03:55.699682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.723 [2024-12-10 05:03:55.699700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.723 [2024-12-10 05:03:55.699707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.723 [2024-12-10 05:03:55.699875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.723 [2024-12-10 05:03:55.700046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.723 [2024-12-10 05:03:55.700055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.723 [2024-12-10 05:03:55.700062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.723 [2024-12-10 05:03:55.700072] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.723 [2024-12-10 05:03:55.712304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.723 [2024-12-10 05:03:55.712612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.723 [2024-12-10 05:03:55.712629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.723 [2024-12-10 05:03:55.712636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.723 [2024-12-10 05:03:55.712796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.723 [2024-12-10 05:03:55.712956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.723 [2024-12-10 05:03:55.712966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.723 [2024-12-10 05:03:55.712972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.723 [2024-12-10 05:03:55.712979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.723 [2024-12-10 05:03:55.725111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.723 [2024-12-10 05:03:55.725387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.723 [2024-12-10 05:03:55.725405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.723 [2024-12-10 05:03:55.725412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.723 [2024-12-10 05:03:55.725572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.723 [2024-12-10 05:03:55.725732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.723 [2024-12-10 05:03:55.725742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.723 [2024-12-10 05:03:55.725748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.723 [2024-12-10 05:03:55.725754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.723 [2024-12-10 05:03:55.738023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.723 [2024-12-10 05:03:55.738321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.723 [2024-12-10 05:03:55.738340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.723 [2024-12-10 05:03:55.738347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.723 [2024-12-10 05:03:55.738506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.723 [2024-12-10 05:03:55.738666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.723 [2024-12-10 05:03:55.738676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.723 [2024-12-10 05:03:55.738682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.723 [2024-12-10 05:03:55.738688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.723 [2024-12-10 05:03:55.751016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.723 [2024-12-10 05:03:55.751366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.723 [2024-12-10 05:03:55.751383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.723 [2024-12-10 05:03:55.751390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.723 [2024-12-10 05:03:55.751550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.723 [2024-12-10 05:03:55.751710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.723 [2024-12-10 05:03:55.751720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.723 [2024-12-10 05:03:55.751726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.723 [2024-12-10 05:03:55.751732] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.723 [2024-12-10 05:03:55.763843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.723 [2024-12-10 05:03:55.764118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.723 [2024-12-10 05:03:55.764135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.723 [2024-12-10 05:03:55.764143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.723 [2024-12-10 05:03:55.764307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.723 [2024-12-10 05:03:55.764468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.723 [2024-12-10 05:03:55.764478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.723 [2024-12-10 05:03:55.764484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.723 [2024-12-10 05:03:55.764490] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.723 [2024-12-10 05:03:55.776752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.723 [2024-12-10 05:03:55.777115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.723 [2024-12-10 05:03:55.777133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.723 [2024-12-10 05:03:55.777140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.723 [2024-12-10 05:03:55.777303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.723 [2024-12-10 05:03:55.777486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.723 [2024-12-10 05:03:55.777496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.723 [2024-12-10 05:03:55.777502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.723 [2024-12-10 05:03:55.777509] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.723 [2024-12-10 05:03:55.789630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.723 [2024-12-10 05:03:55.789968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.723 [2024-12-10 05:03:55.790012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.723 [2024-12-10 05:03:55.790036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.723 [2024-12-10 05:03:55.790514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.723 [2024-12-10 05:03:55.790677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.723 [2024-12-10 05:03:55.790687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.723 [2024-12-10 05:03:55.790693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.723 [2024-12-10 05:03:55.790699] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.723 [2024-12-10 05:03:55.802521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.723 [2024-12-10 05:03:55.802802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.723 [2024-12-10 05:03:55.802819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.723 [2024-12-10 05:03:55.802826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.723 [2024-12-10 05:03:55.802986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.723 [2024-12-10 05:03:55.803147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.723 [2024-12-10 05:03:55.803157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.723 [2024-12-10 05:03:55.803163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.723 [2024-12-10 05:03:55.803174] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.723 [2024-12-10 05:03:55.815316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.723 [2024-12-10 05:03:55.815658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.723 [2024-12-10 05:03:55.815676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.723 [2024-12-10 05:03:55.815685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.724 [2024-12-10 05:03:55.815855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.724 [2024-12-10 05:03:55.816025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.724 [2024-12-10 05:03:55.816034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.724 [2024-12-10 05:03:55.816040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.724 [2024-12-10 05:03:55.816047] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.724 [2024-12-10 05:03:55.828330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.724 [2024-12-10 05:03:55.828743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.724 [2024-12-10 05:03:55.828762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.724 [2024-12-10 05:03:55.828770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.724 [2024-12-10 05:03:55.828944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.724 [2024-12-10 05:03:55.829120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.724 [2024-12-10 05:03:55.829134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.724 [2024-12-10 05:03:55.829141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.724 [2024-12-10 05:03:55.829149] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.724 [2024-12-10 05:03:55.841219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.724 [2024-12-10 05:03:55.841606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.724 [2024-12-10 05:03:55.841623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.724 [2024-12-10 05:03:55.841630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.724 [2024-12-10 05:03:55.841790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.724 [2024-12-10 05:03:55.841950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.724 [2024-12-10 05:03:55.841960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.724 [2024-12-10 05:03:55.841966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.724 [2024-12-10 05:03:55.841972] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.984 [2024-12-10 05:03:55.854350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.984 [2024-12-10 05:03:55.854791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.984 [2024-12-10 05:03:55.854835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.984 [2024-12-10 05:03:55.854859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.984 [2024-12-10 05:03:55.855471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.984 [2024-12-10 05:03:55.855642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.984 [2024-12-10 05:03:55.855652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.984 [2024-12-10 05:03:55.855658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.984 [2024-12-10 05:03:55.855665] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.984 [2024-12-10 05:03:55.867199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.984 [2024-12-10 05:03:55.867622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.984 [2024-12-10 05:03:55.867666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.984 [2024-12-10 05:03:55.867690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.984 [2024-12-10 05:03:55.868289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.984 [2024-12-10 05:03:55.868508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.984 [2024-12-10 05:03:55.868515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.984 [2024-12-10 05:03:55.868521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.984 [2024-12-10 05:03:55.868530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.984 [2024-12-10 05:03:55.880180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.984 [2024-12-10 05:03:55.880599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.984 [2024-12-10 05:03:55.880617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.984 [2024-12-10 05:03:55.880625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.984 [2024-12-10 05:03:55.880785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.984 [2024-12-10 05:03:55.880946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.984 [2024-12-10 05:03:55.880955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.984 [2024-12-10 05:03:55.880961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.984 [2024-12-10 05:03:55.880967] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.984 [2024-12-10 05:03:55.892993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.984 [2024-12-10 05:03:55.893343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.984 [2024-12-10 05:03:55.893361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.984 [2024-12-10 05:03:55.893368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.984 [2024-12-10 05:03:55.893527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.984 [2024-12-10 05:03:55.893688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.985 [2024-12-10 05:03:55.893698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.985 [2024-12-10 05:03:55.893704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.985 [2024-12-10 05:03:55.893710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.985 [2024-12-10 05:03:55.905875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.985 [2024-12-10 05:03:55.906214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.985 [2024-12-10 05:03:55.906232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.985 [2024-12-10 05:03:55.906241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.985 [2024-12-10 05:03:55.906402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.985 [2024-12-10 05:03:55.906563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.985 [2024-12-10 05:03:55.906573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.985 [2024-12-10 05:03:55.906580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.985 [2024-12-10 05:03:55.906586] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.985 [2024-12-10 05:03:55.918612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.985 [2024-12-10 05:03:55.919026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.985 [2024-12-10 05:03:55.919059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.985 [2024-12-10 05:03:55.919084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.985 [2024-12-10 05:03:55.919684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.985 [2024-12-10 05:03:55.920077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.985 [2024-12-10 05:03:55.920095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.985 [2024-12-10 05:03:55.920110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.985 [2024-12-10 05:03:55.920124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.985 5865.00 IOPS, 22.91 MiB/s [2024-12-10T04:03:56.122Z] [2024-12-10 05:03:55.935189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.985 [2024-12-10 05:03:55.935712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.985 [2024-12-10 05:03:55.935765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.985 [2024-12-10 05:03:55.935788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.985 [2024-12-10 05:03:55.936388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.985 [2024-12-10 05:03:55.936739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.985 [2024-12-10 05:03:55.936751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.985 [2024-12-10 05:03:55.936761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.985 [2024-12-10 05:03:55.936770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.985 [2024-12-10 05:03:55.948199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.985 [2024-12-10 05:03:55.948625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.985 [2024-12-10 05:03:55.948673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.985 [2024-12-10 05:03:55.948698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.985 [2024-12-10 05:03:55.949295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.985 [2024-12-10 05:03:55.949862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.985 [2024-12-10 05:03:55.949872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.985 [2024-12-10 05:03:55.949879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.985 [2024-12-10 05:03:55.949886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.985 [2024-12-10 05:03:55.963534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.985 [2024-12-10 05:03:55.964037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.985 [2024-12-10 05:03:55.964060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.985 [2024-12-10 05:03:55.964071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.985 [2024-12-10 05:03:55.964340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.985 [2024-12-10 05:03:55.964598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.985 [2024-12-10 05:03:55.964611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.985 [2024-12-10 05:03:55.964621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.985 [2024-12-10 05:03:55.964630] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.985 [2024-12-10 05:03:55.976477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.985 [2024-12-10 05:03:55.976918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.985 [2024-12-10 05:03:55.976963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.985 [2024-12-10 05:03:55.976986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.985 [2024-12-10 05:03:55.977583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.985 [2024-12-10 05:03:55.978054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.985 [2024-12-10 05:03:55.978064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.985 [2024-12-10 05:03:55.978071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.985 [2024-12-10 05:03:55.978077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.985 [2024-12-10 05:03:55.989409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.985 [2024-12-10 05:03:55.989797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.985 [2024-12-10 05:03:55.989814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.985 [2024-12-10 05:03:55.989822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.985 [2024-12-10 05:03:55.989982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.985 [2024-12-10 05:03:55.990143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.985 [2024-12-10 05:03:55.990153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.985 [2024-12-10 05:03:55.990159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.985 [2024-12-10 05:03:55.990172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.985 [2024-12-10 05:03:56.002177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.985 [2024-12-10 05:03:56.002586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.985 [2024-12-10 05:03:56.002604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.985 [2024-12-10 05:03:56.002611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.985 [2024-12-10 05:03:56.002770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.985 [2024-12-10 05:03:56.002930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.985 [2024-12-10 05:03:56.002942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.985 [2024-12-10 05:03:56.002949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.985 [2024-12-10 05:03:56.002955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.985 [2024-12-10 05:03:56.015043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.985 [2024-12-10 05:03:56.015461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.985 [2024-12-10 05:03:56.015479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.985 [2024-12-10 05:03:56.015486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.985 [2024-12-10 05:03:56.015647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.985 [2024-12-10 05:03:56.015807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.985 [2024-12-10 05:03:56.015816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.985 [2024-12-10 05:03:56.015823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.985 [2024-12-10 05:03:56.015830] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.985 [2024-12-10 05:03:56.027853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.985 [2024-12-10 05:03:56.028272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.985 [2024-12-10 05:03:56.028319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.985 [2024-12-10 05:03:56.028343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.985 [2024-12-10 05:03:56.028926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.986 [2024-12-10 05:03:56.029328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.986 [2024-12-10 05:03:56.029339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.986 [2024-12-10 05:03:56.029346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.986 [2024-12-10 05:03:56.029352] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.986 [2024-12-10 05:03:56.040670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.986 [2024-12-10 05:03:56.041097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-12-10 05:03:56.041141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.986 [2024-12-10 05:03:56.041179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.986 [2024-12-10 05:03:56.041765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.986 [2024-12-10 05:03:56.042274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.986 [2024-12-10 05:03:56.042284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.986 [2024-12-10 05:03:56.042290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.986 [2024-12-10 05:03:56.042301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.986 [2024-12-10 05:03:56.053481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.986 [2024-12-10 05:03:56.053892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-12-10 05:03:56.053910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.986 [2024-12-10 05:03:56.053917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.986 [2024-12-10 05:03:56.054076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.986 [2024-12-10 05:03:56.054260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.986 [2024-12-10 05:03:56.054271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.986 [2024-12-10 05:03:56.054277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.986 [2024-12-10 05:03:56.054284] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.986 [2024-12-10 05:03:56.066342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.986 [2024-12-10 05:03:56.066753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-12-10 05:03:56.066770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.986 [2024-12-10 05:03:56.066779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.986 [2024-12-10 05:03:56.066940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.986 [2024-12-10 05:03:56.067099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.986 [2024-12-10 05:03:56.067108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.986 [2024-12-10 05:03:56.067115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.986 [2024-12-10 05:03:56.067121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.986 [2024-12-10 05:03:56.079407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.986 [2024-12-10 05:03:56.079827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-12-10 05:03:56.079844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.986 [2024-12-10 05:03:56.079852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.986 [2024-12-10 05:03:56.080027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.986 [2024-12-10 05:03:56.080209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.986 [2024-12-10 05:03:56.080220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.986 [2024-12-10 05:03:56.080227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.986 [2024-12-10 05:03:56.080234] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.986 [2024-12-10 05:03:56.092224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.986 [2024-12-10 05:03:56.092638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-12-10 05:03:56.092655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.986 [2024-12-10 05:03:56.092663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.986 [2024-12-10 05:03:56.092822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.986 [2024-12-10 05:03:56.092983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.986 [2024-12-10 05:03:56.092992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.986 [2024-12-10 05:03:56.092999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.986 [2024-12-10 05:03:56.093005] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.986 [2024-12-10 05:03:56.105137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.986 [2024-12-10 05:03:56.105544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.986 [2024-12-10 05:03:56.105562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:04.986 [2024-12-10 05:03:56.105570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:04.986 [2024-12-10 05:03:56.105740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:04.986 [2024-12-10 05:03:56.105910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.986 [2024-12-10 05:03:56.105920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.986 [2024-12-10 05:03:56.105926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.986 [2024-12-10 05:03:56.105934] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.246 [2024-12-10 05:03:56.118097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.246 [2024-12-10 05:03:56.118529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-12-10 05:03:56.118546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.246 [2024-12-10 05:03:56.118553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.246 [2024-12-10 05:03:56.118714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.246 [2024-12-10 05:03:56.118874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.246 [2024-12-10 05:03:56.118883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.246 [2024-12-10 05:03:56.118890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.246 [2024-12-10 05:03:56.118896] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.246 [2024-12-10 05:03:56.130876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.246 [2024-12-10 05:03:56.131218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-12-10 05:03:56.131237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.246 [2024-12-10 05:03:56.131245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.246 [2024-12-10 05:03:56.131424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.246 [2024-12-10 05:03:56.131586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.246 [2024-12-10 05:03:56.131596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.246 [2024-12-10 05:03:56.131602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.246 [2024-12-10 05:03:56.131609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.246 [2024-12-10 05:03:56.143721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.246 [2024-12-10 05:03:56.144067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-12-10 05:03:56.144084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.246 [2024-12-10 05:03:56.144092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.247 [2024-12-10 05:03:56.144275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.247 [2024-12-10 05:03:56.144445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.247 [2024-12-10 05:03:56.144455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.247 [2024-12-10 05:03:56.144461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.247 [2024-12-10 05:03:56.144468] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.247 [2024-12-10 05:03:56.156630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.247 [2024-12-10 05:03:56.157037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.247 [2024-12-10 05:03:56.157083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.247 [2024-12-10 05:03:56.157106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.247 [2024-12-10 05:03:56.157602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.247 [2024-12-10 05:03:56.157765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.247 [2024-12-10 05:03:56.157774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.247 [2024-12-10 05:03:56.157780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.247 [2024-12-10 05:03:56.157786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.247 [2024-12-10 05:03:56.169508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.247 [2024-12-10 05:03:56.169923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.247 [2024-12-10 05:03:56.169940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.247 [2024-12-10 05:03:56.169947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.247 [2024-12-10 05:03:56.170107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.247 [2024-12-10 05:03:56.170274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.247 [2024-12-10 05:03:56.170288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.247 [2024-12-10 05:03:56.170294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.247 [2024-12-10 05:03:56.170300] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.247 [2024-12-10 05:03:56.182388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.247 [2024-12-10 05:03:56.182777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.247 [2024-12-10 05:03:56.182794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.247 [2024-12-10 05:03:56.182802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.247 [2024-12-10 05:03:56.182961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.247 [2024-12-10 05:03:56.183121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.247 [2024-12-10 05:03:56.183130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.247 [2024-12-10 05:03:56.183136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.247 [2024-12-10 05:03:56.183143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.247 [2024-12-10 05:03:56.195202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.247 [2024-12-10 05:03:56.195620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.247 [2024-12-10 05:03:56.195664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.247 [2024-12-10 05:03:56.195687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.247 [2024-12-10 05:03:56.196284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.247 [2024-12-10 05:03:56.196743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.247 [2024-12-10 05:03:56.196752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.247 [2024-12-10 05:03:56.196759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.247 [2024-12-10 05:03:56.196765] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.247 [2024-12-10 05:03:56.208030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.247 [2024-12-10 05:03:56.208447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.247 [2024-12-10 05:03:56.208465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.247 [2024-12-10 05:03:56.208473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.247 [2024-12-10 05:03:56.208632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.247 [2024-12-10 05:03:56.208792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.247 [2024-12-10 05:03:56.208802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.247 [2024-12-10 05:03:56.208808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.247 [2024-12-10 05:03:56.208818] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.247 [2024-12-10 05:03:56.220873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.247 [2024-12-10 05:03:56.221262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.247 [2024-12-10 05:03:56.221279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.247 [2024-12-10 05:03:56.221287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.247 [2024-12-10 05:03:56.221447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.247 [2024-12-10 05:03:56.221614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.247 [2024-12-10 05:03:56.221623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.247 [2024-12-10 05:03:56.221629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.247 [2024-12-10 05:03:56.221636] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.247 [2024-12-10 05:03:56.233656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.247 [2024-12-10 05:03:56.234053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.247 [2024-12-10 05:03:56.234098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.247 [2024-12-10 05:03:56.234122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.247 [2024-12-10 05:03:56.234612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.247 [2024-12-10 05:03:56.234784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.247 [2024-12-10 05:03:56.234793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.247 [2024-12-10 05:03:56.234800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.247 [2024-12-10 05:03:56.234806] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.247 [2024-12-10 05:03:56.246522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.247 [2024-12-10 05:03:56.246932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.247 [2024-12-10 05:03:56.246949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.247 [2024-12-10 05:03:56.246957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.247 [2024-12-10 05:03:56.247116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.247 [2024-12-10 05:03:56.247304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.247 [2024-12-10 05:03:56.247314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.247 [2024-12-10 05:03:56.247321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.247 [2024-12-10 05:03:56.247327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.247 [2024-12-10 05:03:56.259285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.247 [2024-12-10 05:03:56.259688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.247 [2024-12-10 05:03:56.259705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.247 [2024-12-10 05:03:56.259713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.247 [2024-12-10 05:03:56.259881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.247 [2024-12-10 05:03:56.260052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.247 [2024-12-10 05:03:56.260062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.247 [2024-12-10 05:03:56.260069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.247 [2024-12-10 05:03:56.260075] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.247 [2024-12-10 05:03:56.272032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.247 [2024-12-10 05:03:56.272456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.247 [2024-12-10 05:03:56.272508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.247 [2024-12-10 05:03:56.272532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.247 [2024-12-10 05:03:56.273116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.247 [2024-12-10 05:03:56.273716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.248 [2024-12-10 05:03:56.273743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.248 [2024-12-10 05:03:56.273765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.248 [2024-12-10 05:03:56.273785] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.248 [2024-12-10 05:03:56.287159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.248 [2024-12-10 05:03:56.287613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.248 [2024-12-10 05:03:56.287635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.248 [2024-12-10 05:03:56.287646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.248 [2024-12-10 05:03:56.287902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.248 [2024-12-10 05:03:56.288158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.248 [2024-12-10 05:03:56.288179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.248 [2024-12-10 05:03:56.288189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.248 [2024-12-10 05:03:56.288199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.248 [2024-12-10 05:03:56.300050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.248 [2024-12-10 05:03:56.300482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.248 [2024-12-10 05:03:56.300527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.248 [2024-12-10 05:03:56.300550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.248 [2024-12-10 05:03:56.300955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.248 [2024-12-10 05:03:56.301125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.248 [2024-12-10 05:03:56.301135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.248 [2024-12-10 05:03:56.301141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.248 [2024-12-10 05:03:56.301148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.248 [2024-12-10 05:03:56.312817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.248 [2024-12-10 05:03:56.313243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.248 [2024-12-10 05:03:56.313291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.248 [2024-12-10 05:03:56.313315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.248 [2024-12-10 05:03:56.313818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.248 [2024-12-10 05:03:56.313980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.248 [2024-12-10 05:03:56.313988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.248 [2024-12-10 05:03:56.313994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.248 [2024-12-10 05:03:56.314000] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.248 [2024-12-10 05:03:56.325574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.248 [2024-12-10 05:03:56.326005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.248 [2024-12-10 05:03:56.326022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.248 [2024-12-10 05:03:56.326029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.248 [2024-12-10 05:03:56.326194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.248 [2024-12-10 05:03:56.326379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.248 [2024-12-10 05:03:56.326389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.248 [2024-12-10 05:03:56.326395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.248 [2024-12-10 05:03:56.326402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.248 [2024-12-10 05:03:56.338582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.248 [2024-12-10 05:03:56.338938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.248 [2024-12-10 05:03:56.338956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.248 [2024-12-10 05:03:56.338964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.248 [2024-12-10 05:03:56.339139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.248 [2024-12-10 05:03:56.339321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.248 [2024-12-10 05:03:56.339334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.248 [2024-12-10 05:03:56.339343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.248 [2024-12-10 05:03:56.339352] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.248 [2024-12-10 05:03:56.351420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.248 [2024-12-10 05:03:56.351756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.248 [2024-12-10 05:03:56.351773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.248 [2024-12-10 05:03:56.351781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.248 [2024-12-10 05:03:56.351940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.248 [2024-12-10 05:03:56.352101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.248 [2024-12-10 05:03:56.352110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.248 [2024-12-10 05:03:56.352117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.248 [2024-12-10 05:03:56.352123] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.248 [2024-12-10 05:03:56.364372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.248 [2024-12-10 05:03:56.364788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.248 [2024-12-10 05:03:56.364832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.248 [2024-12-10 05:03:56.364856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.248 [2024-12-10 05:03:56.365450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.248 [2024-12-10 05:03:56.365927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.248 [2024-12-10 05:03:56.365936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.248 [2024-12-10 05:03:56.365942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.248 [2024-12-10 05:03:56.365949] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.248 [2024-12-10 05:03:56.377421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.509 [2024-12-10 05:03:56.377848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.509 [2024-12-10 05:03:56.377865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.509 [2024-12-10 05:03:56.377873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.509 [2024-12-10 05:03:56.378047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.509 [2024-12-10 05:03:56.378227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.509 [2024-12-10 05:03:56.378238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.509 [2024-12-10 05:03:56.378245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.509 [2024-12-10 05:03:56.378256] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.509 [2024-12-10 05:03:56.390297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.509 [2024-12-10 05:03:56.390724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.509 [2024-12-10 05:03:56.390769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.509 [2024-12-10 05:03:56.390792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.509 [2024-12-10 05:03:56.391233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.509 [2024-12-10 05:03:56.391403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.509 [2024-12-10 05:03:56.391411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.509 [2024-12-10 05:03:56.391418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.509 [2024-12-10 05:03:56.391424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.509 [2024-12-10 05:03:56.403270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.509 [2024-12-10 05:03:56.403670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.509 [2024-12-10 05:03:56.403688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.509 [2024-12-10 05:03:56.403695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.509 [2024-12-10 05:03:56.403864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.509 [2024-12-10 05:03:56.404035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.509 [2024-12-10 05:03:56.404045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.509 [2024-12-10 05:03:56.404052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.509 [2024-12-10 05:03:56.404058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.509 [2024-12-10 05:03:56.416071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.509 [2024-12-10 05:03:56.416496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.509 [2024-12-10 05:03:56.416541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.509 [2024-12-10 05:03:56.416566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.509 [2024-12-10 05:03:56.417060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.509 [2024-12-10 05:03:56.417246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.509 [2024-12-10 05:03:56.417256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.509 [2024-12-10 05:03:56.417262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.509 [2024-12-10 05:03:56.417269] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.509 [2024-12-10 05:03:56.428884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.509 [2024-12-10 05:03:56.429311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.509 [2024-12-10 05:03:56.429357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.509 [2024-12-10 05:03:56.429381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.509 [2024-12-10 05:03:56.429964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.509 [2024-12-10 05:03:56.430171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.509 [2024-12-10 05:03:56.430181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.509 [2024-12-10 05:03:56.430188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.509 [2024-12-10 05:03:56.430213] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.509 [2024-12-10 05:03:56.441691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.509 [2024-12-10 05:03:56.442087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.509 [2024-12-10 05:03:56.442132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.509 [2024-12-10 05:03:56.442155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.509 [2024-12-10 05:03:56.442755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.509 [2024-12-10 05:03:56.443260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.509 [2024-12-10 05:03:56.443270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.509 [2024-12-10 05:03:56.443277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.509 [2024-12-10 05:03:56.443284] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.509 [2024-12-10 05:03:56.454447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.509 [2024-12-10 05:03:56.454865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.509 [2024-12-10 05:03:56.454908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.509 [2024-12-10 05:03:56.454932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.509 [2024-12-10 05:03:56.455532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.509 [2024-12-10 05:03:56.456122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.509 [2024-12-10 05:03:56.456151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.509 [2024-12-10 05:03:56.456157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.509 [2024-12-10 05:03:56.456164] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.509 [2024-12-10 05:03:56.467197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.509 [2024-12-10 05:03:56.467636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.509 [2024-12-10 05:03:56.467680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.509 [2024-12-10 05:03:56.467704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.509 [2024-12-10 05:03:56.468246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.509 [2024-12-10 05:03:56.468417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.509 [2024-12-10 05:03:56.468426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.509 [2024-12-10 05:03:56.468433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.510 [2024-12-10 05:03:56.468439] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.510 [2024-12-10 05:03:56.480095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.510 [2024-12-10 05:03:56.480500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.510 [2024-12-10 05:03:56.480546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.510 [2024-12-10 05:03:56.480570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.510 [2024-12-10 05:03:56.481019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.510 [2024-12-10 05:03:56.481186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.510 [2024-12-10 05:03:56.481196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.510 [2024-12-10 05:03:56.481202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.510 [2024-12-10 05:03:56.481209] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 777694 Killed "${NVMF_APP[@]}" "$@" 00:27:05.510 05:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:05.510 05:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:05.510 05:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:05.510 05:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:05.510 [2024-12-10 05:03:56.493114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.510 05:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:05.510 [2024-12-10 05:03:56.493569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.510 [2024-12-10 05:03:56.493587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.510 [2024-12-10 05:03:56.493595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.510 [2024-12-10 05:03:56.493784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.510 [2024-12-10 05:03:56.493960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.510 [2024-12-10 05:03:56.493970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.510 [2024-12-10 05:03:56.493977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:27:05.510 [2024-12-10 05:03:56.493983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.510 05:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=778928 00:27:05.510 05:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 778928 00:27:05.510 05:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:05.510 05:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 778928 ']' 00:27:05.510 05:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.510 05:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:05.510 05:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:05.510 05:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:05.510 05:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:05.510 [2024-12-10 05:03:56.506224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.510 [2024-12-10 05:03:56.506591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.510 [2024-12-10 05:03:56.506607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.510 [2024-12-10 05:03:56.506615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.510 [2024-12-10 05:03:56.506788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.510 [2024-12-10 05:03:56.506961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.510 [2024-12-10 05:03:56.506968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.510 [2024-12-10 05:03:56.506975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.510 [2024-12-10 05:03:56.506981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.510 [2024-12-10 05:03:56.519220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.510 [2024-12-10 05:03:56.519646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.510 [2024-12-10 05:03:56.519663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.510 [2024-12-10 05:03:56.519670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.510 [2024-12-10 05:03:56.519844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.510 [2024-12-10 05:03:56.520017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.510 [2024-12-10 05:03:56.520026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.510 [2024-12-10 05:03:56.520032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.510 [2024-12-10 05:03:56.520038] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.510 [2024-12-10 05:03:56.532207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.510 [2024-12-10 05:03:56.532642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.510 [2024-12-10 05:03:56.532658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.510 [2024-12-10 05:03:56.532665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.510 [2024-12-10 05:03:56.532838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.510 [2024-12-10 05:03:56.533015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.510 [2024-12-10 05:03:56.533024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.510 [2024-12-10 05:03:56.533031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.510 [2024-12-10 05:03:56.533037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.510 [2024-12-10 05:03:56.545205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.510 [2024-12-10 05:03:56.545635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.510 [2024-12-10 05:03:56.545652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.510 [2024-12-10 05:03:56.545659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.510 [2024-12-10 05:03:56.545828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.510 [2024-12-10 05:03:56.545996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.510 [2024-12-10 05:03:56.546005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.510 [2024-12-10 05:03:56.546011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.510 [2024-12-10 05:03:56.546018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.510 [2024-12-10 05:03:56.548852] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
00:27:05.510 [2024-12-10 05:03:56.548891] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:05.510 [2024-12-10 05:03:56.558382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.510 [2024-12-10 05:03:56.558818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.510 [2024-12-10 05:03:56.558835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.510 [2024-12-10 05:03:56.558842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.510 [2024-12-10 05:03:56.559011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.510 [2024-12-10 05:03:56.559186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.510 [2024-12-10 05:03:56.559195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.510 [2024-12-10 05:03:56.559202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.510 [2024-12-10 05:03:56.559208] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.510 [2024-12-10 05:03:56.571406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.510 [2024-12-10 05:03:56.571820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.510 [2024-12-10 05:03:56.571837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.510 [2024-12-10 05:03:56.571844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.510 [2024-12-10 05:03:56.572012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.510 [2024-12-10 05:03:56.572189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.510 [2024-12-10 05:03:56.572197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.510 [2024-12-10 05:03:56.572203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.510 [2024-12-10 05:03:56.572210] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.510 [2024-12-10 05:03:56.584387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.510 [2024-12-10 05:03:56.584731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.510 [2024-12-10 05:03:56.584747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.511 [2024-12-10 05:03:56.584755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.511 [2024-12-10 05:03:56.584925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.511 [2024-12-10 05:03:56.585094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.511 [2024-12-10 05:03:56.585102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.511 [2024-12-10 05:03:56.585109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.511 [2024-12-10 05:03:56.585115] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.511 [2024-12-10 05:03:56.597407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.511 [2024-12-10 05:03:56.597823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.511 [2024-12-10 05:03:56.597839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.511 [2024-12-10 05:03:56.597847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.511 [2024-12-10 05:03:56.598020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.511 [2024-12-10 05:03:56.598200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.511 [2024-12-10 05:03:56.598209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.511 [2024-12-10 05:03:56.598216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.511 [2024-12-10 05:03:56.598222] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.511 [2024-12-10 05:03:56.610342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.511 [2024-12-10 05:03:56.610691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.511 [2024-12-10 05:03:56.610708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.511 [2024-12-10 05:03:56.610715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.511 [2024-12-10 05:03:56.610883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.511 [2024-12-10 05:03:56.611052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.511 [2024-12-10 05:03:56.611061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.511 [2024-12-10 05:03:56.611070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.511 [2024-12-10 05:03:56.611077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.511 [2024-12-10 05:03:56.623329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.511 [2024-12-10 05:03:56.623732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.511 [2024-12-10 05:03:56.623749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.511 [2024-12-10 05:03:56.623756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.511 [2024-12-10 05:03:56.623925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.511 [2024-12-10 05:03:56.624093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.511 [2024-12-10 05:03:56.624102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.511 [2024-12-10 05:03:56.624108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.511 [2024-12-10 05:03:56.624114] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.511 [2024-12-10 05:03:56.625584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:05.511 [2024-12-10 05:03:56.636300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.511 [2024-12-10 05:03:56.636755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.511 [2024-12-10 05:03:56.636774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.511 [2024-12-10 05:03:56.636782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.511 [2024-12-10 05:03:56.636956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.511 [2024-12-10 05:03:56.637131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.511 [2024-12-10 05:03:56.637142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.511 [2024-12-10 05:03:56.637149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.511 [2024-12-10 05:03:56.637157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.771 [2024-12-10 05:03:56.649315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.771 [2024-12-10 05:03:56.649747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.771 [2024-12-10 05:03:56.649764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.771 [2024-12-10 05:03:56.649773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.771 [2024-12-10 05:03:56.649943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.771 [2024-12-10 05:03:56.650113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.771 [2024-12-10 05:03:56.650122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.771 [2024-12-10 05:03:56.650129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.771 [2024-12-10 05:03:56.650139] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.771 [2024-12-10 05:03:56.662238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.771 [2024-12-10 05:03:56.662666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.771 [2024-12-10 05:03:56.662683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.771 [2024-12-10 05:03:56.662690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.771 [2024-12-10 05:03:56.662859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.771 [2024-12-10 05:03:56.663029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.771 [2024-12-10 05:03:56.663038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.771 [2024-12-10 05:03:56.663044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.771 [2024-12-10 05:03:56.663051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.771 [2024-12-10 05:03:56.665612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:05.771 [2024-12-10 05:03:56.665637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:05.771 [2024-12-10 05:03:56.665645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:05.771 [2024-12-10 05:03:56.665651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:05.771 [2024-12-10 05:03:56.665656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:05.771 [2024-12-10 05:03:56.666884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:05.772 [2024-12-10 05:03:56.666989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:05.772 [2024-12-10 05:03:56.666990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:27:05.772 [2024-12-10 05:03:56.675256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.772 [2024-12-10 05:03:56.675709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.772 [2024-12-10 05:03:56.675729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.772 [2024-12-10 05:03:56.675738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.772 [2024-12-10 05:03:56.675914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.772 [2024-12-10 05:03:56.676090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.772 [2024-12-10 05:03:56.676098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.772 [2024-12-10 05:03:56.676105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.772 [2024-12-10 05:03:56.676112] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.772 [2024-12-10 05:03:56.688360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.772 [2024-12-10 05:03:56.688790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.772 [2024-12-10 05:03:56.688810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.772 [2024-12-10 05:03:56.688819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.772 [2024-12-10 05:03:56.688999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.772 [2024-12-10 05:03:56.689180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.772 [2024-12-10 05:03:56.689188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.772 [2024-12-10 05:03:56.689196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.772 [2024-12-10 05:03:56.689203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.772 [2024-12-10 05:03:56.701434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.772 [2024-12-10 05:03:56.701860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.772 [2024-12-10 05:03:56.701880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.772 [2024-12-10 05:03:56.701889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.772 [2024-12-10 05:03:56.702065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.772 [2024-12-10 05:03:56.702245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.772 [2024-12-10 05:03:56.702254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.772 [2024-12-10 05:03:56.702263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.772 [2024-12-10 05:03:56.702270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.772 [2024-12-10 05:03:56.714519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.772 [2024-12-10 05:03:56.714959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.772 [2024-12-10 05:03:56.714979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.772 [2024-12-10 05:03:56.714988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.772 [2024-12-10 05:03:56.715163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.772 [2024-12-10 05:03:56.715343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.772 [2024-12-10 05:03:56.715353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.772 [2024-12-10 05:03:56.715360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.772 [2024-12-10 05:03:56.715368] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.772 [2024-12-10 05:03:56.727603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.772 [2024-12-10 05:03:56.728034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.772 [2024-12-10 05:03:56.728053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.772 [2024-12-10 05:03:56.728062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.772 [2024-12-10 05:03:56.728241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.772 [2024-12-10 05:03:56.728417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.772 [2024-12-10 05:03:56.728430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.772 [2024-12-10 05:03:56.728438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.772 [2024-12-10 05:03:56.728445] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.772 [2024-12-10 05:03:56.740662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.772 [2024-12-10 05:03:56.741003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.772 [2024-12-10 05:03:56.741020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.772 [2024-12-10 05:03:56.741028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.772 [2024-12-10 05:03:56.741207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.772 [2024-12-10 05:03:56.741381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.772 [2024-12-10 05:03:56.741390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.772 [2024-12-10 05:03:56.741397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.772 [2024-12-10 05:03:56.741403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.772 [2024-12-10 05:03:56.753789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.772 [2024-12-10 05:03:56.754197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.772 [2024-12-10 05:03:56.754215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.772 [2024-12-10 05:03:56.754222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.772 [2024-12-10 05:03:56.754395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.772 [2024-12-10 05:03:56.754569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.772 [2024-12-10 05:03:56.754577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.772 [2024-12-10 05:03:56.754584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.772 [2024-12-10 05:03:56.754591] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.772 [2024-12-10 05:03:56.766804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.772 [2024-12-10 05:03:56.767210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.772 [2024-12-10 05:03:56.767227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.772 [2024-12-10 05:03:56.767234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.772 [2024-12-10 05:03:56.767408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.772 [2024-12-10 05:03:56.767582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.772 [2024-12-10 05:03:56.767590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.772 [2024-12-10 05:03:56.767598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.772 [2024-12-10 05:03:56.767604] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.772 [2024-12-10 05:03:56.779813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.772 [2024-12-10 05:03:56.780220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.772 [2024-12-10 05:03:56.780237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.772 [2024-12-10 05:03:56.780244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.772 [2024-12-10 05:03:56.780418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.772 [2024-12-10 05:03:56.780591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.772 [2024-12-10 05:03:56.780600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.772 [2024-12-10 05:03:56.780606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.772 [2024-12-10 05:03:56.780612] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.772 [2024-12-10 05:03:56.792834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.772 [2024-12-10 05:03:56.793169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.772 [2024-12-10 05:03:56.793186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.772 [2024-12-10 05:03:56.793193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.772 [2024-12-10 05:03:56.793366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.772 [2024-12-10 05:03:56.793540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.772 [2024-12-10 05:03:56.793548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.772 [2024-12-10 05:03:56.793555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.772 [2024-12-10 05:03:56.793561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.772 [2024-12-10 05:03:56.805924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.772 [2024-12-10 05:03:56.806342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.772 [2024-12-10 05:03:56.806360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.772 [2024-12-10 05:03:56.806367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.772 [2024-12-10 05:03:56.806540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.772 [2024-12-10 05:03:56.806714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.772 [2024-12-10 05:03:56.806723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.772 [2024-12-10 05:03:56.806730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.772 [2024-12-10 05:03:56.806736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.772 [2024-12-10 05:03:56.819177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.772 [2024-12-10 05:03:56.819543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.772 [2024-12-10 05:03:56.819564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.772 [2024-12-10 05:03:56.819572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.772 [2024-12-10 05:03:56.819746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.772 [2024-12-10 05:03:56.819920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.772 [2024-12-10 05:03:56.819929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.772 [2024-12-10 05:03:56.819936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.772 [2024-12-10 05:03:56.819942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.772 [2024-12-10 05:03:56.832198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.772 [2024-12-10 05:03:56.832581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.772 [2024-12-10 05:03:56.832598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420
00:27:05.773 [2024-12-10 05:03:56.832605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set
00:27:05.773 [2024-12-10 05:03:56.832779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor
00:27:05.773 [2024-12-10 05:03:56.832953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.773 [2024-12-10 05:03:56.832962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.773 [2024-12-10 05:03:56.832968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.773 [2024-12-10 05:03:56.832975] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.773 [2024-12-10 05:03:56.845237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.773 [2024-12-10 05:03:56.845588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.773 [2024-12-10 05:03:56.845604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.773 [2024-12-10 05:03:56.845612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.773 [2024-12-10 05:03:56.845785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.773 [2024-12-10 05:03:56.845958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.773 [2024-12-10 05:03:56.845966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.773 [2024-12-10 05:03:56.845973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.773 [2024-12-10 05:03:56.845979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.773 [2024-12-10 05:03:56.858218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.773 [2024-12-10 05:03:56.858577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.773 [2024-12-10 05:03:56.858595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.773 [2024-12-10 05:03:56.858603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.773 [2024-12-10 05:03:56.858780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.773 [2024-12-10 05:03:56.858953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.773 [2024-12-10 05:03:56.858962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.773 [2024-12-10 05:03:56.858968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.773 [2024-12-10 05:03:56.858974] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.773 [2024-12-10 05:03:56.871219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.773 [2024-12-10 05:03:56.871518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.773 [2024-12-10 05:03:56.871535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.773 [2024-12-10 05:03:56.871542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.773 [2024-12-10 05:03:56.871716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.773 [2024-12-10 05:03:56.871890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.773 [2024-12-10 05:03:56.871898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.773 [2024-12-10 05:03:56.871904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.773 [2024-12-10 05:03:56.871911] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.773 [2024-12-10 05:03:56.884318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.773 [2024-12-10 05:03:56.884679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.773 [2024-12-10 05:03:56.884696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.773 [2024-12-10 05:03:56.884703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.773 [2024-12-10 05:03:56.884876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.773 [2024-12-10 05:03:56.885049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.773 [2024-12-10 05:03:56.885057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.773 [2024-12-10 05:03:56.885064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.773 [2024-12-10 05:03:56.885070] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.773 [2024-12-10 05:03:56.897293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.773 [2024-12-10 05:03:56.897658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.773 [2024-12-10 05:03:56.897675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:05.773 [2024-12-10 05:03:56.897682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:05.773 [2024-12-10 05:03:56.897855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:05.773 [2024-12-10 05:03:56.898029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.773 [2024-12-10 05:03:56.898038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.773 [2024-12-10 05:03:56.898048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.773 [2024-12-10 05:03:56.898054] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.033 [2024-12-10 05:03:56.910332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.033 [2024-12-10 05:03:56.910713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.033 [2024-12-10 05:03:56.910729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.033 [2024-12-10 05:03:56.910737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.033 [2024-12-10 05:03:56.910909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.033 [2024-12-10 05:03:56.911082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.033 [2024-12-10 05:03:56.911090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.033 [2024-12-10 05:03:56.911097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.033 [2024-12-10 05:03:56.911103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.033 [2024-12-10 05:03:56.923356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.033 [2024-12-10 05:03:56.923704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.033 [2024-12-10 05:03:56.923721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.033 [2024-12-10 05:03:56.923728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.033 [2024-12-10 05:03:56.923901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.033 [2024-12-10 05:03:56.924082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.033 [2024-12-10 05:03:56.924091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.033 [2024-12-10 05:03:56.924098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.033 [2024-12-10 05:03:56.924104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.033 4887.50 IOPS, 19.09 MiB/s [2024-12-10T04:03:57.170Z] [2024-12-10 05:03:56.936481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.033 [2024-12-10 05:03:56.936910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.033 [2024-12-10 05:03:56.936927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.033 [2024-12-10 05:03:56.936934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.033 [2024-12-10 05:03:56.937107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.033 [2024-12-10 05:03:56.937286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.033 [2024-12-10 05:03:56.937296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.033 [2024-12-10 05:03:56.937302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.033 [2024-12-10 05:03:56.937308] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.033 [2024-12-10 05:03:56.949563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.033 [2024-12-10 05:03:56.949973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.033 [2024-12-10 05:03:56.949990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.033 [2024-12-10 05:03:56.949997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.033 [2024-12-10 05:03:56.950174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.033 [2024-12-10 05:03:56.950348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.033 [2024-12-10 05:03:56.950356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.033 [2024-12-10 05:03:56.950363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.033 [2024-12-10 05:03:56.950369] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.033 [2024-12-10 05:03:56.962562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.033 [2024-12-10 05:03:56.962849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.033 [2024-12-10 05:03:56.962865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.033 [2024-12-10 05:03:56.962872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.033 [2024-12-10 05:03:56.963045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.033 [2024-12-10 05:03:56.963224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.033 [2024-12-10 05:03:56.963234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.033 [2024-12-10 05:03:56.963240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.033 [2024-12-10 05:03:56.963246] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.033 [2024-12-10 05:03:56.975642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.033 [2024-12-10 05:03:56.975923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.033 [2024-12-10 05:03:56.975940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.033 [2024-12-10 05:03:56.975947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.033 [2024-12-10 05:03:56.976121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.033 [2024-12-10 05:03:56.976300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.033 [2024-12-10 05:03:56.976308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.033 [2024-12-10 05:03:56.976315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.033 [2024-12-10 05:03:56.976321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.033 [2024-12-10 05:03:56.988738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.033 [2024-12-10 05:03:56.989096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.033 [2024-12-10 05:03:56.989116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.034 [2024-12-10 05:03:56.989123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.034 [2024-12-10 05:03:56.989301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.034 [2024-12-10 05:03:56.989475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.034 [2024-12-10 05:03:56.989483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.034 [2024-12-10 05:03:56.989490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.034 [2024-12-10 05:03:56.989496] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.034 [2024-12-10 05:03:57.001739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.034 [2024-12-10 05:03:57.002074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-12-10 05:03:57.002091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.034 [2024-12-10 05:03:57.002098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.034 [2024-12-10 05:03:57.002275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.034 [2024-12-10 05:03:57.002448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.034 [2024-12-10 05:03:57.002456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.034 [2024-12-10 05:03:57.002463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.034 [2024-12-10 05:03:57.002469] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.034 [2024-12-10 05:03:57.014864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.034 [2024-12-10 05:03:57.015227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-12-10 05:03:57.015244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.034 [2024-12-10 05:03:57.015251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.034 [2024-12-10 05:03:57.015424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.034 [2024-12-10 05:03:57.015598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.034 [2024-12-10 05:03:57.015607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.034 [2024-12-10 05:03:57.015613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.034 [2024-12-10 05:03:57.015620] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.034 [2024-12-10 05:03:57.027878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.034 [2024-12-10 05:03:57.028246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-12-10 05:03:57.028264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.034 [2024-12-10 05:03:57.028272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.034 [2024-12-10 05:03:57.028449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.034 [2024-12-10 05:03:57.028623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.034 [2024-12-10 05:03:57.028632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.034 [2024-12-10 05:03:57.028638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.034 [2024-12-10 05:03:57.028644] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.034 [2024-12-10 05:03:57.040885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.034 [2024-12-10 05:03:57.041287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-12-10 05:03:57.041305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.034 [2024-12-10 05:03:57.041312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.034 [2024-12-10 05:03:57.041486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.034 [2024-12-10 05:03:57.041660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.034 [2024-12-10 05:03:57.041668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.034 [2024-12-10 05:03:57.041676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.034 [2024-12-10 05:03:57.041682] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.034 [2024-12-10 05:03:57.053915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.034 [2024-12-10 05:03:57.054341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-12-10 05:03:57.054358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.034 [2024-12-10 05:03:57.054366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.034 [2024-12-10 05:03:57.054539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.034 [2024-12-10 05:03:57.054712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.034 [2024-12-10 05:03:57.054721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.034 [2024-12-10 05:03:57.054727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.034 [2024-12-10 05:03:57.054733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.034 [2024-12-10 05:03:57.066983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.034 [2024-12-10 05:03:57.067360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-12-10 05:03:57.067377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.034 [2024-12-10 05:03:57.067384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.034 [2024-12-10 05:03:57.067557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.034 [2024-12-10 05:03:57.067731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.034 [2024-12-10 05:03:57.067742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.034 [2024-12-10 05:03:57.067749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.034 [2024-12-10 05:03:57.067755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.034 [2024-12-10 05:03:57.079992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.034 [2024-12-10 05:03:57.080347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-12-10 05:03:57.080364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.034 [2024-12-10 05:03:57.080371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.034 [2024-12-10 05:03:57.080544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.034 [2024-12-10 05:03:57.080718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.034 [2024-12-10 05:03:57.080726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.034 [2024-12-10 05:03:57.080733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.034 [2024-12-10 05:03:57.080739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.034 [2024-12-10 05:03:57.092977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.034 [2024-12-10 05:03:57.093340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-12-10 05:03:57.093357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.034 [2024-12-10 05:03:57.093365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.034 [2024-12-10 05:03:57.093538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.034 [2024-12-10 05:03:57.093712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.034 [2024-12-10 05:03:57.093720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.034 [2024-12-10 05:03:57.093726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.034 [2024-12-10 05:03:57.093732] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.034 [2024-12-10 05:03:57.106016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.034 [2024-12-10 05:03:57.106324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-12-10 05:03:57.106341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.034 [2024-12-10 05:03:57.106348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.034 [2024-12-10 05:03:57.106522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.034 [2024-12-10 05:03:57.106697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.034 [2024-12-10 05:03:57.106706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.034 [2024-12-10 05:03:57.106713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.034 [2024-12-10 05:03:57.106719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.034 [2024-12-10 05:03:57.119143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.034 [2024-12-10 05:03:57.119484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-12-10 05:03:57.119501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.034 [2024-12-10 05:03:57.119509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.034 [2024-12-10 05:03:57.119682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.034 [2024-12-10 05:03:57.119856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.034 [2024-12-10 05:03:57.119864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.034 [2024-12-10 05:03:57.119870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.034 [2024-12-10 05:03:57.119876] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.034 [2024-12-10 05:03:57.132078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.034 [2024-12-10 05:03:57.132431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-12-10 05:03:57.132448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.034 [2024-12-10 05:03:57.132455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.034 [2024-12-10 05:03:57.132628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.034 [2024-12-10 05:03:57.132803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.034 [2024-12-10 05:03:57.132811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.034 [2024-12-10 05:03:57.132817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.034 [2024-12-10 05:03:57.132823] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.034 [2024-12-10 05:03:57.145067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.034 [2024-12-10 05:03:57.145438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-12-10 05:03:57.145456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.034 [2024-12-10 05:03:57.145463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.034 [2024-12-10 05:03:57.145637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.034 [2024-12-10 05:03:57.145810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.034 [2024-12-10 05:03:57.145818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.034 [2024-12-10 05:03:57.145825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.034 [2024-12-10 05:03:57.145831] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.034 [2024-12-10 05:03:57.158067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.034 [2024-12-10 05:03:57.158429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-12-10 05:03:57.158449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.034 [2024-12-10 05:03:57.158457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.034 [2024-12-10 05:03:57.158630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.034 [2024-12-10 05:03:57.158802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.034 [2024-12-10 05:03:57.158811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.034 [2024-12-10 05:03:57.158817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.034 [2024-12-10 05:03:57.158823] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.294 [2024-12-10 05:03:57.171081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.294 [2024-12-10 05:03:57.171373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.294 [2024-12-10 05:03:57.171391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.294 [2024-12-10 05:03:57.171398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.294 [2024-12-10 05:03:57.171572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.294 [2024-12-10 05:03:57.171746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.294 [2024-12-10 05:03:57.171754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.294 [2024-12-10 05:03:57.171760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.294 [2024-12-10 05:03:57.171767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.294 [2024-12-10 05:03:57.184188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.294 [2024-12-10 05:03:57.184478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.294 [2024-12-10 05:03:57.184494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.294 [2024-12-10 05:03:57.184502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.294 [2024-12-10 05:03:57.184676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.294 [2024-12-10 05:03:57.184850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.294 [2024-12-10 05:03:57.184859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.294 [2024-12-10 05:03:57.184866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.294 [2024-12-10 05:03:57.184872] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.294 [2024-12-10 05:03:57.197289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.294 [2024-12-10 05:03:57.197678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.294 [2024-12-10 05:03:57.197695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.294 [2024-12-10 05:03:57.197702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.294 [2024-12-10 05:03:57.197881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.294 [2024-12-10 05:03:57.198055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.294 [2024-12-10 05:03:57.198064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.294 [2024-12-10 05:03:57.198071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.294 [2024-12-10 05:03:57.198077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.294 [2024-12-10 05:03:57.210325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.294 [2024-12-10 05:03:57.210686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.294 [2024-12-10 05:03:57.210703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.294 [2024-12-10 05:03:57.210710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.294 [2024-12-10 05:03:57.210884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.294 [2024-12-10 05:03:57.211058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.294 [2024-12-10 05:03:57.211066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.294 [2024-12-10 05:03:57.211073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.294 [2024-12-10 05:03:57.211079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.294 [2024-12-10 05:03:57.223325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.295 [2024-12-10 05:03:57.223682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.295 [2024-12-10 05:03:57.223698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.295 [2024-12-10 05:03:57.223706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.295 [2024-12-10 05:03:57.223879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.295 [2024-12-10 05:03:57.224052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.295 [2024-12-10 05:03:57.224061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.295 [2024-12-10 05:03:57.224067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.295 [2024-12-10 05:03:57.224073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.295 [2024-12-10 05:03:57.236334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.295 [2024-12-10 05:03:57.236764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.295 [2024-12-10 05:03:57.236780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.295 [2024-12-10 05:03:57.236787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.295 [2024-12-10 05:03:57.236960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.295 [2024-12-10 05:03:57.237135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.295 [2024-12-10 05:03:57.237146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.295 [2024-12-10 05:03:57.237153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.295 [2024-12-10 05:03:57.237159] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.295 [2024-12-10 05:03:57.249391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.295 [2024-12-10 05:03:57.249816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.295 [2024-12-10 05:03:57.249833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.295 [2024-12-10 05:03:57.249841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.295 [2024-12-10 05:03:57.250014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.295 [2024-12-10 05:03:57.250192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.295 [2024-12-10 05:03:57.250201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.295 [2024-12-10 05:03:57.250207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.295 [2024-12-10 05:03:57.250213] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.295 [2024-12-10 05:03:57.262446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.295 [2024-12-10 05:03:57.262880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.295 [2024-12-10 05:03:57.262896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.295 [2024-12-10 05:03:57.262903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.295 [2024-12-10 05:03:57.263075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.295 [2024-12-10 05:03:57.263253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.295 [2024-12-10 05:03:57.263262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.295 [2024-12-10 05:03:57.263268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.295 [2024-12-10 05:03:57.263275] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.295 [2024-12-10 05:03:57.275496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.295 [2024-12-10 05:03:57.275900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.295 [2024-12-10 05:03:57.275916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.295 [2024-12-10 05:03:57.275923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.295 [2024-12-10 05:03:57.276096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.295 [2024-12-10 05:03:57.276274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.295 [2024-12-10 05:03:57.276283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.295 [2024-12-10 05:03:57.276289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.295 [2024-12-10 05:03:57.276295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.295 [2024-12-10 05:03:57.288529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.295 [2024-12-10 05:03:57.288963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.295 [2024-12-10 05:03:57.288980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.295 [2024-12-10 05:03:57.288987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.295 [2024-12-10 05:03:57.289160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.295 [2024-12-10 05:03:57.289339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.295 [2024-12-10 05:03:57.289348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.295 [2024-12-10 05:03:57.289354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.295 [2024-12-10 05:03:57.289360] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.295 [2024-12-10 05:03:57.301579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.295 [2024-12-10 05:03:57.302007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.295 [2024-12-10 05:03:57.302023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.295 [2024-12-10 05:03:57.302030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.295 [2024-12-10 05:03:57.302208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.295 [2024-12-10 05:03:57.302383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.295 [2024-12-10 05:03:57.302392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.295 [2024-12-10 05:03:57.302398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.295 [2024-12-10 05:03:57.302404] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.295 [2024-12-10 05:03:57.314626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.295 [2024-12-10 05:03:57.315032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.295 [2024-12-10 05:03:57.315048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.295 [2024-12-10 05:03:57.315055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.295 [2024-12-10 05:03:57.315232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.295 [2024-12-10 05:03:57.315405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.295 [2024-12-10 05:03:57.315414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.295 [2024-12-10 05:03:57.315420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.295 [2024-12-10 05:03:57.315426] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.295 [2024-12-10 05:03:57.327662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.295 [2024-12-10 05:03:57.328090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.295 [2024-12-10 05:03:57.328110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.295 [2024-12-10 05:03:57.328118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.295 [2024-12-10 05:03:57.328296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.295 [2024-12-10 05:03:57.328470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.295 [2024-12-10 05:03:57.328479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.295 [2024-12-10 05:03:57.328486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.295 [2024-12-10 05:03:57.328492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.295 [2024-12-10 05:03:57.340725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.295 [2024-12-10 05:03:57.341079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.295 [2024-12-10 05:03:57.341096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.295 [2024-12-10 05:03:57.341103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.295 [2024-12-10 05:03:57.341279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.295 [2024-12-10 05:03:57.341454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.295 [2024-12-10 05:03:57.341462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.295 [2024-12-10 05:03:57.341468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.295 [2024-12-10 05:03:57.341475] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.295 [2024-12-10 05:03:57.353688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.295 [2024-12-10 05:03:57.354118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.296 [2024-12-10 05:03:57.354135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.296 [2024-12-10 05:03:57.354142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.296 [2024-12-10 05:03:57.354319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.296 [2024-12-10 05:03:57.354493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.296 [2024-12-10 05:03:57.354501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.296 [2024-12-10 05:03:57.354507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.296 [2024-12-10 05:03:57.354514] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.296 [2024-12-10 05:03:57.366733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.296 [2024-12-10 05:03:57.367085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.296 [2024-12-10 05:03:57.367102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.296 [2024-12-10 05:03:57.367111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.296 [2024-12-10 05:03:57.367292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.296 [2024-12-10 05:03:57.367467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.296 [2024-12-10 05:03:57.367476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.296 [2024-12-10 05:03:57.367484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.296 [2024-12-10 05:03:57.367491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.296 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:06.296 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:06.296 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:06.296 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:06.296 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.296 [2024-12-10 05:03:57.379744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.296 [2024-12-10 05:03:57.380105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.296 [2024-12-10 05:03:57.380121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.296 [2024-12-10 05:03:57.380129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.296 [2024-12-10 05:03:57.380338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.296 [2024-12-10 05:03:57.380512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.296 [2024-12-10 05:03:57.380520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.296 [2024-12-10 05:03:57.380527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.296 [2024-12-10 05:03:57.380533] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.296 [2024-12-10 05:03:57.392809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.296 [2024-12-10 05:03:57.393104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.296 [2024-12-10 05:03:57.393121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.296 [2024-12-10 05:03:57.393128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.296 [2024-12-10 05:03:57.393305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.296 [2024-12-10 05:03:57.393480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.296 [2024-12-10 05:03:57.393487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.296 [2024-12-10 05:03:57.393494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.296 [2024-12-10 05:03:57.393500] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.296 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.296 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:06.296 [2024-12-10 05:03:57.405860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.296 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.296 [2024-12-10 05:03:57.406133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.296 [2024-12-10 05:03:57.406151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.296 [2024-12-10 05:03:57.406158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.296 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.296 [2024-12-10 05:03:57.406357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.296 [2024-12-10 05:03:57.406545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.296 [2024-12-10 05:03:57.406554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.296 [2024-12-10 05:03:57.406561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.296 [2024-12-10 05:03:57.406568] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.296 [2024-12-10 05:03:57.409370] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.296 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.296 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:06.296 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.296 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.296 [2024-12-10 05:03:57.418842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.296 [2024-12-10 05:03:57.419249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.296 [2024-12-10 05:03:57.419266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.296 [2024-12-10 05:03:57.419274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.296 [2024-12-10 05:03:57.419447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.296 [2024-12-10 05:03:57.419621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.296 [2024-12-10 05:03:57.419630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.296 [2024-12-10 05:03:57.419636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.296 [2024-12-10 05:03:57.419643] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.556 [2024-12-10 05:03:57.431939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.556 [2024-12-10 05:03:57.432354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.556 [2024-12-10 05:03:57.432372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.556 [2024-12-10 05:03:57.432380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.556 [2024-12-10 05:03:57.432554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.556 [2024-12-10 05:03:57.432728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.556 [2024-12-10 05:03:57.432737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.556 [2024-12-10 05:03:57.432747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.556 [2024-12-10 05:03:57.432754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.556 [2024-12-10 05:03:57.444996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.556 [2024-12-10 05:03:57.445425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.556 [2024-12-10 05:03:57.445443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.556 [2024-12-10 05:03:57.445451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.556 [2024-12-10 05:03:57.445624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.556 [2024-12-10 05:03:57.445797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.556 [2024-12-10 05:03:57.445805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.556 [2024-12-10 05:03:57.445812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.556 [2024-12-10 05:03:57.445818] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.556 Malloc0 00:27:06.556 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.556 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:06.556 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.556 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.556 [2024-12-10 05:03:57.458051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.556 [2024-12-10 05:03:57.458482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.556 [2024-12-10 05:03:57.458499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.556 [2024-12-10 05:03:57.458506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.556 [2024-12-10 05:03:57.458679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.556 [2024-12-10 05:03:57.458853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.556 [2024-12-10 05:03:57.458861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.556 [2024-12-10 05:03:57.458867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.556 [2024-12-10 05:03:57.458874] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.556 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.556 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:06.556 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.556 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.556 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.556 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:06.556 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.556 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.556 [2024-12-10 05:03:57.471087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.556 [2024-12-10 05:03:57.471528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.556 [2024-12-10 05:03:57.471545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc760 with addr=10.0.0.2, port=4420 00:27:06.556 [2024-12-10 05:03:57.471552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc760 is same with the state(6) to be set 00:27:06.556 [2024-12-10 05:03:57.471725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc760 (9): Bad file descriptor 00:27:06.556 [2024-12-10 05:03:57.471898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.556 [2024-12-10 05:03:57.471906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:27:06.556 [2024-12-10 05:03:57.471913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.556 [2024-12-10 05:03:57.471920] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.556 [2024-12-10 05:03:57.472660] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:06.556 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.556 05:03:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 778025 00:27:06.556 [2024-12-10 05:03:57.484142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.556 [2024-12-10 05:03:57.549686] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:27:08.194 4806.29 IOPS, 18.77 MiB/s [2024-12-10T04:04:00.267Z] 5631.12 IOPS, 22.00 MiB/s [2024-12-10T04:04:01.204Z] 6283.33 IOPS, 24.54 MiB/s [2024-12-10T04:04:02.142Z] 6791.10 IOPS, 26.53 MiB/s [2024-12-10T04:04:03.078Z] 7226.55 IOPS, 28.23 MiB/s [2024-12-10T04:04:04.015Z] 7584.17 IOPS, 29.63 MiB/s [2024-12-10T04:04:05.392Z] 7883.85 IOPS, 30.80 MiB/s [2024-12-10T04:04:06.329Z] 8135.64 IOPS, 31.78 MiB/s [2024-12-10T04:04:06.329Z] 8361.93 IOPS, 32.66 MiB/s 00:27:15.192 Latency(us) 00:27:15.192 [2024-12-10T04:04:06.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.192 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:15.192 Verification LBA range: start 0x0 length 0x4000 00:27:15.192 Nvme1n1 : 15.01 8363.68 32.67 13072.60 0.00 5951.93 433.01 13544.11 00:27:15.192 [2024-12-10T04:04:06.329Z] =================================================================================================================== 00:27:15.192 
[2024-12-10T04:04:06.329Z] Total : 8363.68 32.67 13072.60 0.00 5951.93 433.01 13544.11 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:15.192 rmmod nvme_tcp 00:27:15.192 rmmod nvme_fabrics 00:27:15.192 rmmod nvme_keyring 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 778928 ']' 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 778928 
00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 778928 ']' 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 778928 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 778928 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 778928' 00:27:15.192 killing process with pid 778928 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 778928 00:27:15.192 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 778928 00:27:15.452 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:15.452 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:15.452 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:15.452 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:15.452 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:15.452 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:15.452 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:15.452 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:27:15.452 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:15.452 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.452 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.452 05:04:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:17.990 00:27:17.990 real 0m25.996s 00:27:17.990 user 1m0.846s 00:27:17.990 sys 0m6.659s 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.990 ************************************ 00:27:17.990 END TEST nvmf_bdevperf 00:27:17.990 ************************************ 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.990 ************************************ 00:27:17.990 START TEST nvmf_target_disconnect 00:27:17.990 ************************************ 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:17.990 * Looking for test storage... 
00:27:17.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:17.990 05:04:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:17.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.990 
--rc genhtml_branch_coverage=1 00:27:17.990 --rc genhtml_function_coverage=1 00:27:17.990 --rc genhtml_legend=1 00:27:17.990 --rc geninfo_all_blocks=1 00:27:17.990 --rc geninfo_unexecuted_blocks=1 00:27:17.990 00:27:17.990 ' 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:17.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.990 --rc genhtml_branch_coverage=1 00:27:17.990 --rc genhtml_function_coverage=1 00:27:17.990 --rc genhtml_legend=1 00:27:17.990 --rc geninfo_all_blocks=1 00:27:17.990 --rc geninfo_unexecuted_blocks=1 00:27:17.990 00:27:17.990 ' 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:17.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.990 --rc genhtml_branch_coverage=1 00:27:17.990 --rc genhtml_function_coverage=1 00:27:17.990 --rc genhtml_legend=1 00:27:17.990 --rc geninfo_all_blocks=1 00:27:17.990 --rc geninfo_unexecuted_blocks=1 00:27:17.990 00:27:17.990 ' 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:17.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.990 --rc genhtml_branch_coverage=1 00:27:17.990 --rc genhtml_function_coverage=1 00:27:17.990 --rc genhtml_legend=1 00:27:17.990 --rc geninfo_all_blocks=1 00:27:17.990 --rc geninfo_unexecuted_blocks=1 00:27:17.990 00:27:17.990 ' 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:17.990 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.991 05:04:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:17.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:17.991 05:04:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:23.265 
05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:23.265 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:23.265 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:23.265 Found net devices under 0000:af:00.0: cvl_0_0 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:23.265 Found net devices under 0000:af:00.1: cvl_0_1 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.265 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.266 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.266 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:23.266 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:23.266 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:23.266 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:23.266 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:23.266 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:23.266 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.266 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:23.525 05:04:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:23.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:23.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:27:23.525 00:27:23.525 --- 10.0.0.2 ping statistics --- 00:27:23.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.525 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:23.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:23.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:27:23.525 00:27:23.525 --- 10.0.0.1 ping statistics --- 00:27:23.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.525 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:23.525 05:04:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:23.525 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:23.784 ************************************ 00:27:23.784 START TEST nvmf_target_disconnect_tc1 00:27:23.784 ************************************ 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:23.784 [2024-12-10 05:04:14.803449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.784 [2024-12-10 05:04:14.803493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14360b0 with 
addr=10.0.0.2, port=4420 00:27:23.784 [2024-12-10 05:04:14.803529] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:23.784 [2024-12-10 05:04:14.803545] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:23.784 [2024-12-10 05:04:14.803551] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:23.784 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:23.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:23.784 Initializing NVMe Controllers 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:23.784 00:27:23.784 real 0m0.116s 00:27:23.784 user 0m0.051s 00:27:23.784 sys 0m0.064s 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:23.784 ************************************ 00:27:23.784 END TEST nvmf_target_disconnect_tc1 00:27:23.784 ************************************ 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:23.784 05:04:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:23.784 ************************************ 00:27:23.784 START TEST nvmf_target_disconnect_tc2 00:27:23.784 ************************************ 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=783999 00:27:23.784 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 783999 00:27:23.785 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:23.785 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 783999 ']' 00:27:23.785 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.785 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:23.785 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.785 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:23.785 05:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.044 [2024-12-10 05:04:14.947593] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:27:24.044 [2024-12-10 05:04:14.947631] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.044 [2024-12-10 05:04:15.024451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:24.044 [2024-12-10 05:04:15.066582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.044 [2024-12-10 05:04:15.066619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:24.044 [2024-12-10 05:04:15.066626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:24.044 [2024-12-10 05:04:15.066632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:24.044 [2024-12-10 05:04:15.066637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
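The `waitforlisten` step above blocks until the freshly started `nvmf_tgt` creates its RPC socket at `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern, with illustrative names (this is not SPDK's actual `waitforlisten`, which also checks that the target pid is still alive):

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style helper: poll until a UNIX-domain
# socket (or any path) appears, or give up after max_retries attempts.
# Illustrative only -- not SPDK's real implementation.
wait_for_rpc_sock() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        # SPDK waits on a socket; -S matches sockets, -e any path
        [[ -S $sock || -e $sock ]] && return 0
        sleep 0.1
    done
    return 1
}
```

In the real helper the retry budget corresponds to the `local max_retries=100` visible in the trace above.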
00:27:24.044 [2024-12-10 05:04:15.068124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:24.044 [2024-12-10 05:04:15.068238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:24.044 [2024-12-10 05:04:15.068343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:24.044 [2024-12-10 05:04:15.068345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:24.979 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:24.979 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:24.979 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:24.979 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:24.979 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.979 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.979 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:24.979 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.979 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.979 Malloc0 00:27:24.979 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.979 05:04:15 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:24.979 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.979 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.979 [2024-12-10 05:04:15.860387] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.980 05:04:15 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.980 [2024-12-10 05:04:15.892645] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=784238 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:24.980 05:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:26.891 05:04:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 783999 00:27:26.891 05:04:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Write completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Write completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Write completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Write completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Write completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Write completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 
Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Write completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Write completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Write completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Write completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Write completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Write completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 starting I/O failed 00:27:26.891 Read completed with error (sct=0, sc=8) 00:27:26.891 [2024-12-10 05:04:17.920869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O 
failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Write completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Write completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Write completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Write completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Write completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Write completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Write completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Write completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Write completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Write completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Write completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Write completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 
00:27:26.892 [2024-12-10 05:04:17.921063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Read completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Write completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Write completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Write completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Write completed with error (sct=0, sc=8) 00:27:26.892 starting I/O failed 00:27:26.892 Write completed with error (sct=0, sc=8) 00:27:26.892 
starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Write completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Write completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Write completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Write completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Write completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Write completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Write completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 [2024-12-10 05:04:17.921260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Write completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Write completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Write completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Write completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Write completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Write completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Write completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Write completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Write completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 Read completed with error (sct=0, sc=8)
00:27:26.892 starting I/O failed
00:27:26.892 [2024-12-10 05:04:17.921449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.892 [2024-12-10 05:04:17.921715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.892 [2024-12-10 05:04:17.921735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.892 qpair failed and we were unable to recover it.
00:27:26.892 [2024-12-10 05:04:17.921880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.892 [2024-12-10 05:04:17.921890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.892 qpair failed and we were unable to recover it.
00:27:26.892 [2024-12-10 05:04:17.921959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.892 [2024-12-10 05:04:17.921968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.892 qpair failed and we were unable to recover it.
00:27:26.892 [2024-12-10 05:04:17.922122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.892 [2024-12-10 05:04:17.922140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.892 qpair failed and we were unable to recover it.
00:27:26.892 [2024-12-10 05:04:17.922330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.892 [2024-12-10 05:04:17.922341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.892 qpair failed and we were unable to recover it.
00:27:26.892 [2024-12-10 05:04:17.922481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.892 [2024-12-10 05:04:17.922491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.892 qpair failed and we were unable to recover it.
00:27:26.892 [2024-12-10 05:04:17.922662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.892 [2024-12-10 05:04:17.922672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.892 qpair failed and we were unable to recover it.
00:27:26.892 [2024-12-10 05:04:17.922810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.892 [2024-12-10 05:04:17.922821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.892 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.922979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.922990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.923203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.923214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.923298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.923307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.923408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.923417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.923564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.923574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.923649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.923658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.923784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.923793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.923939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.923950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.924014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.924024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.924171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.924183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.924378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.924390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.924476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.924485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.924672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.924683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.924753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.924762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.924978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.924989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.925182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.925193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.925325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.925334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.925472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.925483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.925573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.925581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.925667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.925677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.925849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.925858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.926077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.926114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.926362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.926396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.926588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.926619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.926695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.926704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.926948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.926958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.927152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.927162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.927335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.927346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.927559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.927591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.927764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.927796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.928067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.928099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.928266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.928299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.928567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.928599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.928843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.928875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.929057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.929088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.929327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.929361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.929602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.929633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.929863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.893 [2024-12-10 05:04:17.929895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.893 qpair failed and we were unable to recover it.
00:27:26.893 [2024-12-10 05:04:17.930138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.930177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.930314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.930345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.930593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.930603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.930756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.930766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.930852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.930862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.931087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.931097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.931222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.931232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.931375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.931386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.931478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.931487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.931710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.931720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.931961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.931971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.932222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.932236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.932486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.932502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.932752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.932765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.932914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.932927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.933057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.933070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.933201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.933216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.933373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.933386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.933594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.933607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.933831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.933844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.934067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.934080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.934155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.934177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.934397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.934410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.934638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.934651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.934755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.934771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.934865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.934880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.935102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.935115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.935313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.935327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.935429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.935442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.935683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.935696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.935919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.935932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.936079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.936092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.936255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.936269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.936436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.936449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.936672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.936685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.936892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.936906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.937122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.937135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.937356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.894 [2024-12-10 05:04:17.937370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.894 qpair failed and we were unable to recover it.
00:27:26.894 [2024-12-10 05:04:17.937517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.937531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.937675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.937688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.937890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.937904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.938128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.938142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.938327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.938341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.938524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.938555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.938755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.938787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.939022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.939054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.939305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.939319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.939460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.939474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.939670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.939683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.939889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.939902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.940118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.940131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.940233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.940246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.940320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.940335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.940425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.940438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.940577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.940590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.940727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.940741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.940820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.940832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.940970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.940983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.941229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.895 [2024-12-10 05:04:17.941243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.895 qpair failed and we were unable to recover it.
00:27:26.895 [2024-12-10 05:04:17.941465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.895 [2024-12-10 05:04:17.941479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.895 qpair failed and we were unable to recover it. 00:27:26.895 [2024-12-10 05:04:17.941654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.895 [2024-12-10 05:04:17.941667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.895 qpair failed and we were unable to recover it. 00:27:26.895 [2024-12-10 05:04:17.941916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.895 [2024-12-10 05:04:17.941947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.895 qpair failed and we were unable to recover it. 00:27:26.895 [2024-12-10 05:04:17.942210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.895 [2024-12-10 05:04:17.942245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.895 qpair failed and we were unable to recover it. 00:27:26.895 [2024-12-10 05:04:17.942433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.895 [2024-12-10 05:04:17.942464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.895 qpair failed and we were unable to recover it. 
00:27:26.895 [2024-12-10 05:04:17.942599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.895 [2024-12-10 05:04:17.942631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.895 qpair failed and we were unable to recover it. 00:27:26.895 [2024-12-10 05:04:17.942764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.895 [2024-12-10 05:04:17.942795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.895 qpair failed and we were unable to recover it. 00:27:26.895 [2024-12-10 05:04:17.942934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.895 [2024-12-10 05:04:17.942966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.895 qpair failed and we were unable to recover it. 00:27:26.895 [2024-12-10 05:04:17.943134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.895 [2024-12-10 05:04:17.943185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.895 qpair failed and we were unable to recover it. 00:27:26.895 [2024-12-10 05:04:17.943476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.895 [2024-12-10 05:04:17.943507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.895 qpair failed and we were unable to recover it. 
00:27:26.895 [2024-12-10 05:04:17.943734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.895 [2024-12-10 05:04:17.943765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.895 qpair failed and we were unable to recover it. 00:27:26.895 [2024-12-10 05:04:17.944046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.895 [2024-12-10 05:04:17.944078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.895 qpair failed and we were unable to recover it. 00:27:26.895 [2024-12-10 05:04:17.944335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.895 [2024-12-10 05:04:17.944352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.895 qpair failed and we were unable to recover it. 00:27:26.895 [2024-12-10 05:04:17.944504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.895 [2024-12-10 05:04:17.944520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.944751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.944781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 
00:27:26.896 [2024-12-10 05:04:17.945097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.945128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.945325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.945357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.945593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.945609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.945842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.945857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.946085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.946100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 
00:27:26.896 [2024-12-10 05:04:17.946306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.946323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.946462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.946477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.946712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.946744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.946996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.947027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.947207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.947224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 
00:27:26.896 [2024-12-10 05:04:17.947455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.947487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.947723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.947754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.947928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.947961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.948250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.948283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.948535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.948566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 
00:27:26.896 [2024-12-10 05:04:17.948826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.948856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.949091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.949123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.949340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.949355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.949581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.949619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.949789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.949820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 
00:27:26.896 [2024-12-10 05:04:17.950003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.950034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.950224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.950258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.950384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.950415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.950693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.950709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.950925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.950940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 
00:27:26.896 [2024-12-10 05:04:17.951073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.951089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.951262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.951279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.951478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.951493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.951635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.951650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 00:27:26.896 [2024-12-10 05:04:17.951796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.896 [2024-12-10 05:04:17.951812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.896 qpair failed and we were unable to recover it. 
00:27:26.896 [2024-12-10 05:04:17.951957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.951972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.952141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.952157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.952300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.952316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.952399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.952413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.952574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.952590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 
00:27:26.897 [2024-12-10 05:04:17.952746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.952762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.952919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.952950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.953211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.953245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.953376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.953407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.953666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.953682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 
00:27:26.897 [2024-12-10 05:04:17.953883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.953899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.954162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.954187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.954341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.954357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.954511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.954526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.954780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.954796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 
00:27:26.897 [2024-12-10 05:04:17.955021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.955037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.955187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.955204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.955365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.955380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.955539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.955554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.955720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.955765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 
00:27:26.897 [2024-12-10 05:04:17.956029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.956061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.956322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.956356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.956599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.956631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.956867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.956898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.957185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.957218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 
00:27:26.897 [2024-12-10 05:04:17.957482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.957515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.957750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.957765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.957916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.957931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.958157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.958204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 00:27:26.897 [2024-12-10 05:04:17.958395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.897 [2024-12-10 05:04:17.958426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.897 qpair failed and we were unable to recover it. 
00:27:26.897 [2024-12-10 05:04:17.958658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.897 [2024-12-10 05:04:17.958689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:26.897 qpair failed and we were unable to recover it.
[... the same three-line failure record — connect() refused (errno = 111) followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7f58dc000b90 at 10.0.0.2:4420 and "qpair failed and we were unable to recover it." — repeats continuously from 05:04:17.958 through 05:04:17.984 ...]
00:27:26.900 [2024-12-10 05:04:17.984848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.900 [2024-12-10 05:04:17.984880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.900 qpair failed and we were unable to recover it. 00:27:26.900 [2024-12-10 05:04:17.985085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.900 [2024-12-10 05:04:17.985117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.900 qpair failed and we were unable to recover it. 00:27:26.900 [2024-12-10 05:04:17.985251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.900 [2024-12-10 05:04:17.985267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.900 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.985419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.985437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.985660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.985692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 
00:27:26.901 [2024-12-10 05:04:17.985887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.985919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.986208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.986243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.986530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.986562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.986747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.986778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.987031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.987062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 
00:27:26.901 [2024-12-10 05:04:17.987247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.987264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.987405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.987421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.987590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.987621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.987811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.987842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.988031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.988063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 
00:27:26.901 [2024-12-10 05:04:17.988248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.988282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.988547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.988579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.988861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.988877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.988975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.988990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.989221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.989238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 
00:27:26.901 [2024-12-10 05:04:17.989444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.989459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.989696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.989712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.989976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.989992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.990216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.990232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.990466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.990482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 
00:27:26.901 [2024-12-10 05:04:17.990637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.990652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.990862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.990894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.991132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.991163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.991418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.991451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 00:27:26.901 [2024-12-10 05:04:17.991693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.901 [2024-12-10 05:04:17.991709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:26.901 qpair failed and we were unable to recover it. 
00:27:26.901 [2024-12-10 05:04:17.991925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.901 [2024-12-10 05:04:17.991969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:26.901 qpair failed and we were unable to recover it.
00:27:26.903 [2024-12-10 05:04:18.007245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.007286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.007484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.007517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.007726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.007758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.007953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.007986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.008235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.008269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 
00:27:26.903 [2024-12-10 05:04:18.008450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.008466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.008714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.008747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.008946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.008979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.009237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.009271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.009535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.009551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 
00:27:26.903 [2024-12-10 05:04:18.009802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.009818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.009983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.009999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.010153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.010185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.010418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.010450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.010753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.010786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 
00:27:26.903 [2024-12-10 05:04:18.010960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.010992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.011269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.011310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.011498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.011514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.011728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.011760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.011901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.011933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 
00:27:26.903 [2024-12-10 05:04:18.012208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.012243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.012546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.012579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.012836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.903 [2024-12-10 05:04:18.012867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.903 qpair failed and we were unable to recover it. 00:27:26.903 [2024-12-10 05:04:18.013115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.904 [2024-12-10 05:04:18.013148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.904 qpair failed and we were unable to recover it. 00:27:26.904 [2024-12-10 05:04:18.013346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.904 [2024-12-10 05:04:18.013378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.904 qpair failed and we were unable to recover it. 
00:27:26.904 [2024-12-10 05:04:18.013570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.904 [2024-12-10 05:04:18.013602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.904 qpair failed and we were unable to recover it. 00:27:26.904 [2024-12-10 05:04:18.013778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.904 [2024-12-10 05:04:18.013795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.904 qpair failed and we were unable to recover it. 00:27:26.904 [2024-12-10 05:04:18.013966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.904 [2024-12-10 05:04:18.013982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.904 qpair failed and we were unable to recover it. 00:27:26.904 [2024-12-10 05:04:18.014190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.904 [2024-12-10 05:04:18.014208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.904 qpair failed and we were unable to recover it. 00:27:26.904 [2024-12-10 05:04:18.014342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.904 [2024-12-10 05:04:18.014358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.904 qpair failed and we were unable to recover it. 
00:27:26.904 [2024-12-10 05:04:18.014606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.904 [2024-12-10 05:04:18.014694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:26.904 qpair failed and we were unable to recover it. 00:27:26.904 [2024-12-10 05:04:18.015033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.904 [2024-12-10 05:04:18.015069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:26.904 qpair failed and we were unable to recover it. 00:27:26.904 [2024-12-10 05:04:18.015359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.904 [2024-12-10 05:04:18.015398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:26.904 qpair failed and we were unable to recover it. 00:27:26.904 [2024-12-10 05:04:18.015578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.904 [2024-12-10 05:04:18.015614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.904 qpair failed and we were unable to recover it. 00:27:26.904 [2024-12-10 05:04:18.015883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.904 [2024-12-10 05:04:18.015915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.904 qpair failed and we were unable to recover it. 
00:27:26.904 [2024-12-10 05:04:18.016118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.904 [2024-12-10 05:04:18.016150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.904 qpair failed and we were unable to recover it. 00:27:26.904 [2024-12-10 05:04:18.016350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.904 [2024-12-10 05:04:18.016382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:26.904 qpair failed and we were unable to recover it. 00:27:26.904 [2024-12-10 05:04:18.016574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.016605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.016814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.016832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.017005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.017036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 
00:27:27.187 [2024-12-10 05:04:18.017298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.017333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.017614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.017659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.017838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.017856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.018011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.018027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.018274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.018309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 
00:27:27.187 [2024-12-10 05:04:18.018490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.018506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.018661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.018702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.018993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.019025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.019293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.019328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.019598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.019614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 
00:27:27.187 [2024-12-10 05:04:18.019844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.019860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.020022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.020039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.020206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.020224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.020380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.020397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.020635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.020651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 
00:27:27.187 [2024-12-10 05:04:18.020884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.020901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.021145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.021161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.021313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.021330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.021561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.021593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.021859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.021891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 
00:27:27.187 [2024-12-10 05:04:18.022130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.022163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.022350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.022383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.022676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.022709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.022942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.022974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.023224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.023259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 
00:27:27.187 [2024-12-10 05:04:18.023557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.023573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.023831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.187 [2024-12-10 05:04:18.023847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.187 qpair failed and we were unable to recover it. 00:27:27.187 [2024-12-10 05:04:18.023987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.024003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 00:27:27.188 [2024-12-10 05:04:18.024240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.024273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 00:27:27.188 [2024-12-10 05:04:18.024534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.024566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 
00:27:27.188 [2024-12-10 05:04:18.024756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.024788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 00:27:27.188 [2024-12-10 05:04:18.025061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.025093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 00:27:27.188 [2024-12-10 05:04:18.025360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.025394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 00:27:27.188 [2024-12-10 05:04:18.025570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.025586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 00:27:27.188 [2024-12-10 05:04:18.025827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.025859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 
00:27:27.188 [2024-12-10 05:04:18.026045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.026078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 00:27:27.188 [2024-12-10 05:04:18.026275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.026329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 00:27:27.188 [2024-12-10 05:04:18.026493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.026509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 00:27:27.188 [2024-12-10 05:04:18.026688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.026721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 00:27:27.188 [2024-12-10 05:04:18.026988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.027021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 
00:27:27.188 [2024-12-10 05:04:18.027269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.027303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 00:27:27.188 [2024-12-10 05:04:18.027492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.027508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 00:27:27.188 [2024-12-10 05:04:18.027660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.027676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 00:27:27.188 [2024-12-10 05:04:18.027781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.027798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 00:27:27.188 [2024-12-10 05:04:18.027976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.188 [2024-12-10 05:04:18.027997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.188 qpair failed and we were unable to recover it. 
00:27:27.188 [2024-12-10 05:04:18.028233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.028250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.028493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.028525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.028817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.028849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.029119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.029150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.029367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.029401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.029667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.029698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.029908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.029923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.030180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.030197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.030426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.030442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.030676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.030692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.030935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.030951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.031187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.031204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.031444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.031460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.031703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.031719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.031930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.031947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.032179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.032196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.032368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.032385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.032566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.032597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.188 [2024-12-10 05:04:18.032841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.188 [2024-12-10 05:04:18.032872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.188 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.033076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.033107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.033363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.033397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.033675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.033707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.033901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.033933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.034186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.034221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.034517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.034548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.034749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.034781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.035028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.035048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.035145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.035161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.035332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.035349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.035525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.035540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.035757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.035789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.036071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.036103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.036303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.036336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.036523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.036555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.036770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.036786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.037029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.037045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.037149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.037170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.037395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.037411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.037658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.037674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.037841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.037858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.038088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.038122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.038325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.038357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.038660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.038691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.038955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.038986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.039268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.039302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.039580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.039596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.039808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.039824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.040009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.040041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.040188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.040222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.040357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.040389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.040657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.040689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.040978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.041009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.041288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.041323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.041601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.041633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.041912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.041928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.042186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.042203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.042474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.042490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.042731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.189 [2024-12-10 05:04:18.042747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.189 qpair failed and we were unable to recover it.
00:27:27.189 [2024-12-10 05:04:18.042955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.042971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.043186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.043203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.043348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.043364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.043618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.043634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.043812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.043829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.043926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.043941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.044154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.044185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.044432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.044464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.044736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.044768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.044899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.044931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.045125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.045157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.045306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.045339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.045609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.045641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.045821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.045852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.046043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.046074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.046355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.046372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.046524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.046541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.046778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.046795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.047057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.047088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.047324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.047359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.047615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.047646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.047949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.047981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.048249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.048283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.048478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.048511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.048777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.048809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.049076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.049092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.049320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.049338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.049602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.049618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.049777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.049794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.050035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.050051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.050286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.050304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.050406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.050422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.050564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.050582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.050748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.050787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.051061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.051093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.051344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.051378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.051658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.190 [2024-12-10 05:04:18.051696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.190 qpair failed and we were unable to recover it.
00:27:27.190 [2024-12-10 05:04:18.051968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.190 [2024-12-10 05:04:18.052000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.190 qpair failed and we were unable to recover it. 00:27:27.190 [2024-12-10 05:04:18.052246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.190 [2024-12-10 05:04:18.052280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.190 qpair failed and we were unable to recover it. 00:27:27.190 [2024-12-10 05:04:18.052425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.190 [2024-12-10 05:04:18.052462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.190 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.052728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.052744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.052981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.052998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 
00:27:27.191 [2024-12-10 05:04:18.053223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.053240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.053501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.053518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.053757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.053774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.053977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.053993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.054210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.054227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 
00:27:27.191 [2024-12-10 05:04:18.054387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.054403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.054568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.054584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.054800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.054817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.055064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.055081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.055177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.055193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 
00:27:27.191 [2024-12-10 05:04:18.055476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.055492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.055706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.055739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.055958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.055990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.056186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.056220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.056522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.056555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 
00:27:27.191 [2024-12-10 05:04:18.056808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.056824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.057004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.057021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.057163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.057185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.057420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.057436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.057603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.057620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 
00:27:27.191 [2024-12-10 05:04:18.057852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.057884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.058084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.058122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.058387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.058422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.058613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.058645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.058835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.058867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 
00:27:27.191 [2024-12-10 05:04:18.059066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.059098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.059366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.059400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.059678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.059709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.060022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.060056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.191 [2024-12-10 05:04:18.060350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.060385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 
00:27:27.191 [2024-12-10 05:04:18.060647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.191 [2024-12-10 05:04:18.060679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.191 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.060964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.060996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.061227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.061262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.061460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.061491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.061691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.061724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 
00:27:27.192 [2024-12-10 05:04:18.061998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.062031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.062143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.062184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.062458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.062490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.062674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.062690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.062868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.062899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 
00:27:27.192 [2024-12-10 05:04:18.063181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.063216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.063422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.063454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.063731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.063748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.064014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.064030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.064119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.064134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 
00:27:27.192 [2024-12-10 05:04:18.064306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.064323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.064477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.064493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.064645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.064686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.064993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.065032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.065158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.065204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 
00:27:27.192 [2024-12-10 05:04:18.065386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.065418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.065692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.065723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.065917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.065949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.066127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.066159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.066377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.066410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 
00:27:27.192 [2024-12-10 05:04:18.066605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.066637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.066841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.066858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.067015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.067031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.067187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.067205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.067425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.067441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 
00:27:27.192 [2024-12-10 05:04:18.067610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.067626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.067863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.067894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.068189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.068224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.068427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.068461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.068711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.068743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 
00:27:27.192 [2024-12-10 05:04:18.068994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.069028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.069228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.069263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.069465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.069497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.069774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.069807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 00:27:27.192 [2024-12-10 05:04:18.070004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.192 [2024-12-10 05:04:18.070037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.192 qpair failed and we were unable to recover it. 
00:27:27.192 [2024-12-10 05:04:18.070335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.193 [2024-12-10 05:04:18.070369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.193 qpair failed and we were unable to recover it. 00:27:27.193 [2024-12-10 05:04:18.070512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.193 [2024-12-10 05:04:18.070545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.193 qpair failed and we were unable to recover it. 00:27:27.193 [2024-12-10 05:04:18.070796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.193 [2024-12-10 05:04:18.070828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.193 qpair failed and we were unable to recover it. 00:27:27.193 [2024-12-10 05:04:18.071093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.193 [2024-12-10 05:04:18.071110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.193 qpair failed and we were unable to recover it. 00:27:27.193 [2024-12-10 05:04:18.071221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.193 [2024-12-10 05:04:18.071238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.193 qpair failed and we were unable to recover it. 
00:27:27.193 [2024-12-10 05:04:18.071459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.193 [2024-12-10 05:04:18.071475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.193 qpair failed and we were unable to recover it. 00:27:27.193 [2024-12-10 05:04:18.071725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.193 [2024-12-10 05:04:18.071742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.193 qpair failed and we were unable to recover it. 00:27:27.193 [2024-12-10 05:04:18.071918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.193 [2024-12-10 05:04:18.071935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.193 qpair failed and we were unable to recover it. 00:27:27.193 [2024-12-10 05:04:18.072179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.193 [2024-12-10 05:04:18.072196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.193 qpair failed and we were unable to recover it. 00:27:27.193 [2024-12-10 05:04:18.072432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.193 [2024-12-10 05:04:18.072465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.193 qpair failed and we were unable to recover it. 
00:27:27.193 [2024-12-10 05:04:18.072743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.193 [2024-12-10 05:04:18.072761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.193 qpair failed and we were unable to recover it.
[The identical three-line failure sequence above (connect() failed with errno = 111 / ECONNREFUSED, followed by the nvme_tcp sock connection error for tqpair=0x12521a0 at addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats continuously in this log from 05:04:18.072 through 05:04:18.098; the repeats are elided here.]
00:27:27.196 [2024-12-10 05:04:18.098570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.098588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.098697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.098714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.098883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.098901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.099086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.099104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.099330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.099348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 
00:27:27.196 [2024-12-10 05:04:18.099434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.099449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.099691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.099707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.099960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.099977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.100225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.100242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.100488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.100505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 
00:27:27.196 [2024-12-10 05:04:18.100688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.100705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.100903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.100936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.101132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.101165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.101388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.101420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.101685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.101718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 
00:27:27.196 [2024-12-10 05:04:18.101979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.101997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.102162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.102191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.102370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.102390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.102700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.102733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.103004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.103036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 
00:27:27.196 [2024-12-10 05:04:18.103342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.103376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.103583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.103615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.103755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.103785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.104101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.104133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 00:27:27.196 [2024-12-10 05:04:18.104393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.196 [2024-12-10 05:04:18.104426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.196 qpair failed and we were unable to recover it. 
00:27:27.196 [2024-12-10 05:04:18.104633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.104665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.104922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.104939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.105114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.105131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.105350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.105370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.105560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.105576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 
00:27:27.197 [2024-12-10 05:04:18.105800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.105817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.106000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.106017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.106236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.106255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.106365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.106381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.106467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.106482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 
00:27:27.197 [2024-12-10 05:04:18.106628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.106645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.106875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.106892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.107058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.107074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.107263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.107281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.107447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.107463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 
00:27:27.197 [2024-12-10 05:04:18.107704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.107720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.107961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.107978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.108143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.108159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.108290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.108311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.108505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.108522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 
00:27:27.197 [2024-12-10 05:04:18.108740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.108757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.109005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.109021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.109291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.109308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.109478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.109494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.109653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.109670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 
00:27:27.197 [2024-12-10 05:04:18.109859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.109876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.110065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.110081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.110266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.110283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.110400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.110416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.110683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.110701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 
00:27:27.197 [2024-12-10 05:04:18.110821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.110837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.110946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.110963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.111142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.111159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.111417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.111435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.111616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.111648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 
00:27:27.197 [2024-12-10 05:04:18.111908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.111939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.112078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.197 [2024-12-10 05:04:18.112111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.197 qpair failed and we were unable to recover it. 00:27:27.197 [2024-12-10 05:04:18.112399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.112432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.112635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.112656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.112923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.112958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 
00:27:27.198 [2024-12-10 05:04:18.113137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.113181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.113316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.113349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.113551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.113583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.113918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.113950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.114130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.114147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 
00:27:27.198 [2024-12-10 05:04:18.114404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.114429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.114650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.114666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.114913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.114929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.115086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.115103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.115370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.115405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 
00:27:27.198 [2024-12-10 05:04:18.115556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.115589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.115793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.115826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.116105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.116137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.116325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.116360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.116628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.116669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 
00:27:27.198 [2024-12-10 05:04:18.116899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.116916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.117149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.117173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.117313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.117330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.117517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.117534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.117685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.117701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 
00:27:27.198 [2024-12-10 05:04:18.117955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.117987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.118318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.118352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.118544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.118577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.118784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.118816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.119002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.119019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 
00:27:27.198 [2024-12-10 05:04:18.119174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.119222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.119347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.119390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.119529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.119563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.119769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.119802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.120080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.120097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 
00:27:27.198 [2024-12-10 05:04:18.120275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.120294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.120397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.120414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.120582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.120599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.120749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.120788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.121055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.121086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 
00:27:27.198 [2024-12-10 05:04:18.121212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.121247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.198 [2024-12-10 05:04:18.121395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.198 [2024-12-10 05:04:18.121428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.198 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.121703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.121736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.121990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.122006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.122272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.122290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 
00:27:27.199 [2024-12-10 05:04:18.122526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.122542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.122727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.122743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.122916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.122949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.123199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.123233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.123503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.123536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 
00:27:27.199 [2024-12-10 05:04:18.123736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.123752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.123997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.124014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.124176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.124194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.124437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.124470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.124699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.124716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 
00:27:27.199 [2024-12-10 05:04:18.124915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.124931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.125100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.125116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.125313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.125330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.125491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.125508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.125751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.125768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 
00:27:27.199 [2024-12-10 05:04:18.125983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.126015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.126275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.126313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.126614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.126647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.126945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.126977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.127181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.127215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 
00:27:27.199 [2024-12-10 05:04:18.127442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.127476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.127693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.127709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.127880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.127897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.128093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.128127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.128341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.128374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 
00:27:27.199 [2024-12-10 05:04:18.128553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.128586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.128870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.128886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.129045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.129061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.129297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.129331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.129542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.129574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 
00:27:27.199 [2024-12-10 05:04:18.129757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.129789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.129990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.130008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.130213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.130248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.130529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.130570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.130764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.130795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 
00:27:27.199 [2024-12-10 05:04:18.131050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.199 [2024-12-10 05:04:18.131081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.199 qpair failed and we were unable to recover it. 00:27:27.199 [2024-12-10 05:04:18.131383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.131418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.131622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.131653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.131923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.131964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.132183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.132201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 
00:27:27.200 [2024-12-10 05:04:18.132442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.132459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.132627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.132643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.132871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.132902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.133096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.133129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.133373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.133406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 
00:27:27.200 [2024-12-10 05:04:18.133636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.133669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.133914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.133930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.134018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.134033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.134287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.134305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.134546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.134563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 
00:27:27.200 [2024-12-10 05:04:18.134673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.134690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.134810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.134826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.134990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.135007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.135227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.135245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.135348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.135364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 
00:27:27.200 [2024-12-10 05:04:18.135464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.135479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.135670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.135687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.135909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.135925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.136013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.136029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.136210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.136227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 
00:27:27.200 [2024-12-10 05:04:18.136341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.136361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.136559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.136592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.136813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.136844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.137067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.137099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 00:27:27.200 [2024-12-10 05:04:18.137306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.200 [2024-12-10 05:04:18.137338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.200 qpair failed and we were unable to recover it. 
00:27:27.200 [2024-12-10 05:04:18.137550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.200 [2024-12-10 05:04:18.137582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.200 qpair failed and we were unable to recover it.
[... the same three-line failure repeats verbatim from 05:04:18.137835 through 05:04:18.163274: connect() failed with errno = 111, sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420, qpair failed and unrecoverable ...]
00:27:27.203 [2024-12-10 05:04:18.163431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.203 [2024-12-10 05:04:18.163448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.203 qpair failed and we were unable to recover it. 00:27:27.203 [2024-12-10 05:04:18.163599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.203 [2024-12-10 05:04:18.163616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.203 qpair failed and we were unable to recover it. 00:27:27.203 [2024-12-10 05:04:18.163709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.203 [2024-12-10 05:04:18.163731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.203 qpair failed and we were unable to recover it. 00:27:27.203 [2024-12-10 05:04:18.163830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.203 [2024-12-10 05:04:18.163846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.203 qpair failed and we were unable to recover it. 00:27:27.203 [2024-12-10 05:04:18.164091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.203 [2024-12-10 05:04:18.164108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.203 qpair failed and we were unable to recover it. 
00:27:27.203 [2024-12-10 05:04:18.164219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.203 [2024-12-10 05:04:18.164236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.203 qpair failed and we were unable to recover it. 00:27:27.203 [2024-12-10 05:04:18.164455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.203 [2024-12-10 05:04:18.164472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.203 qpair failed and we were unable to recover it. 00:27:27.203 [2024-12-10 05:04:18.164667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.164684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.164781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.164796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.164983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.164999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 
00:27:27.204 [2024-12-10 05:04:18.165268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.165286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.165397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.165414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.165581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.165598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.165697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.165714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.165907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.165924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 
00:27:27.204 [2024-12-10 05:04:18.166070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.166087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.166339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.166374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.166633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.166666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.167002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.167041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.167283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.167301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 
00:27:27.204 [2024-12-10 05:04:18.167520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.167537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.167682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.167699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.167940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.167957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.168127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.168144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.168356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.168389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 
00:27:27.204 [2024-12-10 05:04:18.168576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.168609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.168838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.168869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.169144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.169189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.169347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.169379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.169512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.169544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 
00:27:27.204 [2024-12-10 05:04:18.169815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.169847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.170048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.170065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.170257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.170274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.170396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.170412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.170560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.170577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 
00:27:27.204 [2024-12-10 05:04:18.170685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.170701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.170927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.170944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.171183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.171200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.171390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.171406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.171536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.171553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 
00:27:27.204 [2024-12-10 05:04:18.171854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.171873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.172099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.172133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.172378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.172414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.172624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.172658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.172897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.172929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 
00:27:27.204 [2024-12-10 05:04:18.173203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.173240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.204 qpair failed and we were unable to recover it. 00:27:27.204 [2024-12-10 05:04:18.173505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.204 [2024-12-10 05:04:18.173540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.173739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.173757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.173999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.174017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.174239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.174258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 
00:27:27.205 [2024-12-10 05:04:18.174439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.174458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.174658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.174691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.174980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.175014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.175162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.175208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.175366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.175400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 
00:27:27.205 [2024-12-10 05:04:18.175652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.175685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.175970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.176003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.176241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.176259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.176381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.176397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.176564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.176583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 
00:27:27.205 [2024-12-10 05:04:18.176751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.176767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.177051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.177068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.177235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.177254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.177421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.177439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.177630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.177662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 
00:27:27.205 [2024-12-10 05:04:18.177891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.177926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.178107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.178140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.178273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.178292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.178523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.178541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.178718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.178736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 
00:27:27.205 [2024-12-10 05:04:18.178902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.178922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.179121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.179154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.179336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.179369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.179632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.179665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 00:27:27.205 [2024-12-10 05:04:18.179887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.205 [2024-12-10 05:04:18.179920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.205 qpair failed and we were unable to recover it. 
00:27:27.205 [2024-12-10 05:04:18.180122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.205 [2024-12-10 05:04:18.180156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.205 qpair failed and we were unable to recover it.
[the three lines above repeat verbatim, with only the timestamps advancing, from 05:04:18.180 through 05:04:18.201: roughly 100 consecutive connect() attempts from tqpair=0x12521a0 to 10.0.0.2 port 4420, each failing with errno = 111 (ECONNREFUSED) and each ending with an unrecoverable qpair]
00:27:27.208 [2024-12-10 05:04:18.202129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.208 [2024-12-10 05:04:18.202145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.208 qpair failed and we were unable to recover it. 00:27:27.208 [2024-12-10 05:04:18.202304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.208 [2024-12-10 05:04:18.202323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.208 qpair failed and we were unable to recover it. 00:27:27.208 [2024-12-10 05:04:18.202486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.208 [2024-12-10 05:04:18.202509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.208 qpair failed and we were unable to recover it. 00:27:27.208 [2024-12-10 05:04:18.202742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.208 [2024-12-10 05:04:18.202762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.208 qpair failed and we were unable to recover it. 00:27:27.208 [2024-12-10 05:04:18.202938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.208 [2024-12-10 05:04:18.202956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.208 qpair failed and we were unable to recover it. 
00:27:27.208 [2024-12-10 05:04:18.203130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.208 [2024-12-10 05:04:18.203161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.208 qpair failed and we were unable to recover it. 00:27:27.208 [2024-12-10 05:04:18.203370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.208 [2024-12-10 05:04:18.203405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.208 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.203638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.203672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.203920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.203937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.204160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.204188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 
00:27:27.209 [2024-12-10 05:04:18.204354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.204371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.204488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.204505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.204612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.204628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.204734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.204751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.204933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.204965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 
00:27:27.209 [2024-12-10 05:04:18.205151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.205197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.205342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.205374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.205560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.205592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.205873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.205907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.206127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.206144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 
00:27:27.209 [2024-12-10 05:04:18.206328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.206346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.206448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.206464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.206563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.206578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.206756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.206774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.206974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.206992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 
00:27:27.209 [2024-12-10 05:04:18.207187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.207206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.207303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.207319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.207490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.207507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.207678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.207697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.207892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.207924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 
00:27:27.209 [2024-12-10 05:04:18.208193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.208210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.208383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.208400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.208564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.208600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.208720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.208754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.208937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.208970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 
00:27:27.209 [2024-12-10 05:04:18.209185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.209202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.209395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.209414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.209634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.209651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.209961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.209978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.210154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.210179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 
00:27:27.209 [2024-12-10 05:04:18.210421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.209 [2024-12-10 05:04:18.210438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.209 qpair failed and we were unable to recover it. 00:27:27.209 [2024-12-10 05:04:18.210539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.210554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.210667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.210684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.210806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.210825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.211053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.211072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 
00:27:27.210 [2024-12-10 05:04:18.211179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.211197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.211329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.211348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.211508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.211524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.211684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.211702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.211976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.211995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 
00:27:27.210 [2024-12-10 05:04:18.212230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.212250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.212329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.212344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.212564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.212582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.212673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.212688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.212944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.212962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 
00:27:27.210 [2024-12-10 05:04:18.213149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.213192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.213388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.213420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.213551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.213585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.213780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.213813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.214022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.214040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 
00:27:27.210 [2024-12-10 05:04:18.214195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.214227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.214327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.214343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.214460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.214481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.214561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.214577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.214673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.214690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 
00:27:27.210 [2024-12-10 05:04:18.214919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.214938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.215093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.215111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.215206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.215224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.215396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.215414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.215585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.215602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 
00:27:27.210 [2024-12-10 05:04:18.215880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.215897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.216023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.216056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.216324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.216360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.216498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.216531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 00:27:27.210 [2024-12-10 05:04:18.216668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.210 [2024-12-10 05:04:18.216700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.210 qpair failed and we were unable to recover it. 
00:27:27.210 [2024-12-10 05:04:18.216905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.210 [2024-12-10 05:04:18.216938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.210 qpair failed and we were unable to recover it.
00:27:27.210 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it.) repeats continuously for tqpair=0x12521a0 over 05:04:18.217-05:04:18.226 ...]
00:27:27.212 [2024-12-10 05:04:18.226938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.212 [2024-12-10 05:04:18.227017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420
00:27:27.212 qpair failed and we were unable to recover it.
00:27:27.212 [... the same sequence repeats several times for tqpair=0x7f58e8000b90, then resumes for tqpair=0x12521a0 through 05:04:18.237968, always against addr=10.0.0.2, port=4420 ...]
00:27:27.213 [2024-12-10 05:04:18.238050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.213 [2024-12-10 05:04:18.238064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.213 qpair failed and we were unable to recover it. 00:27:27.213 [2024-12-10 05:04:18.238164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.213 [2024-12-10 05:04:18.238199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.213 qpair failed and we were unable to recover it. 00:27:27.213 [2024-12-10 05:04:18.238304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.213 [2024-12-10 05:04:18.238322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.213 qpair failed and we were unable to recover it. 00:27:27.213 [2024-12-10 05:04:18.238406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.213 [2024-12-10 05:04:18.238423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.213 qpair failed and we were unable to recover it. 00:27:27.213 [2024-12-10 05:04:18.238620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.213 [2024-12-10 05:04:18.238637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.213 qpair failed and we were unable to recover it. 
00:27:27.213 [2024-12-10 05:04:18.238737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.213 [2024-12-10 05:04:18.238755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.213 qpair failed and we were unable to recover it. 00:27:27.214 [2024-12-10 05:04:18.238837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.214 [2024-12-10 05:04:18.238854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.214 qpair failed and we were unable to recover it. 00:27:27.214 [2024-12-10 05:04:18.238937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.214 [2024-12-10 05:04:18.238954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.214 qpair failed and we were unable to recover it. 00:27:27.214 [2024-12-10 05:04:18.239032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.214 [2024-12-10 05:04:18.239047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.214 qpair failed and we were unable to recover it. 00:27:27.214 [2024-12-10 05:04:18.239139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.214 [2024-12-10 05:04:18.239155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.214 qpair failed and we were unable to recover it. 
00:27:27.214 [2024-12-10 05:04:18.239248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.214 [2024-12-10 05:04:18.239264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.214 qpair failed and we were unable to recover it. 00:27:27.214 [2024-12-10 05:04:18.239366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.214 [2024-12-10 05:04:18.239384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.214 qpair failed and we were unable to recover it. 00:27:27.214 [2024-12-10 05:04:18.239486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.214 [2024-12-10 05:04:18.239503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.214 qpair failed and we were unable to recover it. 00:27:27.214 [2024-12-10 05:04:18.239650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.214 [2024-12-10 05:04:18.239666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.214 qpair failed and we were unable to recover it. 00:27:27.214 [2024-12-10 05:04:18.239768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.214 [2024-12-10 05:04:18.239785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.214 qpair failed and we were unable to recover it. 
00:27:27.214 [2024-12-10 05:04:18.240517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.214 [2024-12-10 05:04:18.240610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:27.214 qpair failed and we were unable to recover it. 
00:27:27.214 [2024-12-10 05:04:18.240784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.214 [2024-12-10 05:04:18.240862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:27.214 qpair failed and we were unable to recover it. 
00:27:27.216 [2024-12-10 05:04:18.254626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.254658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 00:27:27.216 [2024-12-10 05:04:18.254995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.255014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 00:27:27.216 [2024-12-10 05:04:18.255199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.255217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 00:27:27.216 [2024-12-10 05:04:18.255343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.255361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 00:27:27.216 [2024-12-10 05:04:18.255528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.255545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 
00:27:27.216 [2024-12-10 05:04:18.255711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.255730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 00:27:27.216 [2024-12-10 05:04:18.256042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.256060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 00:27:27.216 [2024-12-10 05:04:18.256223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.256243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 00:27:27.216 [2024-12-10 05:04:18.256353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.256369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 00:27:27.216 [2024-12-10 05:04:18.256551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.256569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 
00:27:27.216 [2024-12-10 05:04:18.256671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.256690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 00:27:27.216 [2024-12-10 05:04:18.256951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.256969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 00:27:27.216 [2024-12-10 05:04:18.257140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.257158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 00:27:27.216 [2024-12-10 05:04:18.257295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.257312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 00:27:27.216 [2024-12-10 05:04:18.257506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.257525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 
00:27:27.216 [2024-12-10 05:04:18.257681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.257703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 00:27:27.216 [2024-12-10 05:04:18.257816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.257833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 00:27:27.216 [2024-12-10 05:04:18.258055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.258072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 00:27:27.216 [2024-12-10 05:04:18.258319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.258338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 00:27:27.216 [2024-12-10 05:04:18.258487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.216 [2024-12-10 05:04:18.258506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.216 qpair failed and we were unable to recover it. 
00:27:27.216 [2024-12-10 05:04:18.258616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.258634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.258809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.258826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.259009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.259044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.259273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.259308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.259515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.259547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 
00:27:27.217 [2024-12-10 05:04:18.259812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.259845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.259994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.260028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.260222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.260239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.260348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.260365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.260607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.260624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 
00:27:27.217 [2024-12-10 05:04:18.260734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.260750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.260923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.260941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.261115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.261132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.261327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.261344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.261446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.261463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 
00:27:27.217 [2024-12-10 05:04:18.261633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.261650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.261770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.261786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.262076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.262094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.262296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.262315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.262495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.262512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 
00:27:27.217 [2024-12-10 05:04:18.262691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.262725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.262947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.262981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.263234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.263271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.263476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.263495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.263652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.263668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 
00:27:27.217 [2024-12-10 05:04:18.263832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.263874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.263992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.264025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.264298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.264333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.264465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.264497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.264751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.264785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 
00:27:27.217 [2024-12-10 05:04:18.264968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.264986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.265180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.265200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.265351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.265368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.265456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.265473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.265662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.265679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 
00:27:27.217 [2024-12-10 05:04:18.265788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.265805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.265934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.265955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.266107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.266123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.217 [2024-12-10 05:04:18.266297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.217 [2024-12-10 05:04:18.266314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.217 qpair failed and we were unable to recover it. 00:27:27.218 [2024-12-10 05:04:18.266484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.218 [2024-12-10 05:04:18.266502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.218 qpair failed and we were unable to recover it. 
00:27:27.218 [2024-12-10 05:04:18.266623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.218 [2024-12-10 05:04:18.266640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.218 qpair failed and we were unable to recover it. 00:27:27.218 [2024-12-10 05:04:18.266859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.218 [2024-12-10 05:04:18.266877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.218 qpair failed and we were unable to recover it. 00:27:27.218 [2024-12-10 05:04:18.267160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.218 [2024-12-10 05:04:18.267195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.218 qpair failed and we were unable to recover it. 00:27:27.218 [2024-12-10 05:04:18.267297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.218 [2024-12-10 05:04:18.267315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.218 qpair failed and we were unable to recover it. 00:27:27.218 [2024-12-10 05:04:18.267459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.218 [2024-12-10 05:04:18.267476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.218 qpair failed and we were unable to recover it. 
00:27:27.218 [2024-12-10 05:04:18.267645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.218 [2024-12-10 05:04:18.267664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.218 qpair failed and we were unable to recover it. 00:27:27.218 [2024-12-10 05:04:18.267831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.218 [2024-12-10 05:04:18.267848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.218 qpair failed and we were unable to recover it. 00:27:27.218 [2024-12-10 05:04:18.268098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.218 [2024-12-10 05:04:18.268115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.218 qpair failed and we were unable to recover it. 00:27:27.218 [2024-12-10 05:04:18.268288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.218 [2024-12-10 05:04:18.268305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.218 qpair failed and we were unable to recover it. 00:27:27.218 [2024-12-10 05:04:18.268414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.218 [2024-12-10 05:04:18.268431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.218 qpair failed and we were unable to recover it. 
00:27:27.218 [2024-12-10 05:04:18.268542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.218 [2024-12-10 05:04:18.268560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.218 qpair failed and we were unable to recover it. 00:27:27.218 [2024-12-10 05:04:18.268726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.218 [2024-12-10 05:04:18.268744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.218 qpair failed and we were unable to recover it. 00:27:27.218 [2024-12-10 05:04:18.268824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.218 [2024-12-10 05:04:18.268839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.218 qpair failed and we were unable to recover it. 00:27:27.218 [2024-12-10 05:04:18.269057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.218 [2024-12-10 05:04:18.269075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.218 qpair failed and we were unable to recover it. 00:27:27.218 [2024-12-10 05:04:18.269240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.218 [2024-12-10 05:04:18.269258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.218 qpair failed and we were unable to recover it. 
00:27:27.218 [2024-12-10 05:04:18.269374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.218 [2024-12-10 05:04:18.269391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.218 qpair failed and we were unable to recover it.
[The same three-line error sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — repeats continuously from 05:04:18.269 through 05:04:18.291; only the timestamps differ. Duplicate records elided.]
00:27:27.221 [2024-12-10 05:04:18.291831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.221 [2024-12-10 05:04:18.291863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.221 qpair failed and we were unable to recover it. 00:27:27.221 [2024-12-10 05:04:18.292014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.221 [2024-12-10 05:04:18.292045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.221 qpair failed and we were unable to recover it. 00:27:27.221 [2024-12-10 05:04:18.292308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.221 [2024-12-10 05:04:18.292346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.221 qpair failed and we were unable to recover it. 00:27:27.221 [2024-12-10 05:04:18.292557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.221 [2024-12-10 05:04:18.292574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.221 qpair failed and we were unable to recover it. 00:27:27.221 [2024-12-10 05:04:18.292754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.221 [2024-12-10 05:04:18.292770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.221 qpair failed and we were unable to recover it. 
00:27:27.221 [2024-12-10 05:04:18.293003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.221 [2024-12-10 05:04:18.293035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.221 qpair failed and we were unable to recover it. 00:27:27.221 [2024-12-10 05:04:18.293234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.221 [2024-12-10 05:04:18.293269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.221 qpair failed and we were unable to recover it. 00:27:27.221 [2024-12-10 05:04:18.293453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.221 [2024-12-10 05:04:18.293498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.221 qpair failed and we were unable to recover it. 00:27:27.221 [2024-12-10 05:04:18.293657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.221 [2024-12-10 05:04:18.293674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.221 qpair failed and we were unable to recover it. 00:27:27.221 [2024-12-10 05:04:18.293856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.221 [2024-12-10 05:04:18.293873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.221 qpair failed and we were unable to recover it. 
00:27:27.221 [2024-12-10 05:04:18.293988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.221 [2024-12-10 05:04:18.294004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.221 qpair failed and we were unable to recover it. 00:27:27.221 [2024-12-10 05:04:18.294223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.221 [2024-12-10 05:04:18.294242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.221 qpair failed and we were unable to recover it. 00:27:27.221 [2024-12-10 05:04:18.294357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.221 [2024-12-10 05:04:18.294374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.221 qpair failed and we were unable to recover it. 00:27:27.221 [2024-12-10 05:04:18.294501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.221 [2024-12-10 05:04:18.294517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.221 qpair failed and we were unable to recover it. 00:27:27.221 [2024-12-10 05:04:18.294618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.221 [2024-12-10 05:04:18.294635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.221 qpair failed and we were unable to recover it. 
00:27:27.221 [2024-12-10 05:04:18.294713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.222 [2024-12-10 05:04:18.294734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.222 qpair failed and we were unable to recover it. 00:27:27.222 [2024-12-10 05:04:18.294911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.222 [2024-12-10 05:04:18.294929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.222 qpair failed and we were unable to recover it. 00:27:27.222 [2024-12-10 05:04:18.295089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.222 [2024-12-10 05:04:18.295107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.222 qpair failed and we were unable to recover it. 00:27:27.222 [2024-12-10 05:04:18.295255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.222 [2024-12-10 05:04:18.295273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.222 qpair failed and we were unable to recover it. 00:27:27.222 [2024-12-10 05:04:18.295368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.222 [2024-12-10 05:04:18.295387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.222 qpair failed and we were unable to recover it. 
00:27:27.222 [2024-12-10 05:04:18.295561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.222 [2024-12-10 05:04:18.295578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.222 qpair failed and we were unable to recover it. 00:27:27.222 [2024-12-10 05:04:18.295669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.222 [2024-12-10 05:04:18.295686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.222 qpair failed and we were unable to recover it. 00:27:27.222 [2024-12-10 05:04:18.295771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.222 [2024-12-10 05:04:18.295785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.222 qpair failed and we were unable to recover it. 00:27:27.222 [2024-12-10 05:04:18.295969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.222 [2024-12-10 05:04:18.295987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.222 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.296157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.296185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 
00:27:27.520 [2024-12-10 05:04:18.296339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.296357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.296469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.296487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.296596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.296614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.296770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.296788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.297060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.297094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 
00:27:27.520 [2024-12-10 05:04:18.297222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.297258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.297465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.297498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.297713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.297745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.297946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.297979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.298106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.298147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 
00:27:27.520 [2024-12-10 05:04:18.298279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.298297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.298447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.298462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.298567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.298583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.298680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.298696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.298918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.298936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 
00:27:27.520 [2024-12-10 05:04:18.299037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.299053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.299143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.299158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.299341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.299358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.299479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.299496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.299667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.299685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 
00:27:27.520 [2024-12-10 05:04:18.299785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.299801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.299986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.300002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.300229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.300249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.300422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.300441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 00:27:27.520 [2024-12-10 05:04:18.300530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.520 [2024-12-10 05:04:18.300545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.520 qpair failed and we were unable to recover it. 
00:27:27.521 [2024-12-10 05:04:18.300699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.300716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.300890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.300906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.301057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.301074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.301196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.301215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.301313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.301330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 
00:27:27.521 [2024-12-10 05:04:18.301419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.301435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.301517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.301532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.301680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.301699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.301791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.301807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.301895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.301910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 
00:27:27.521 [2024-12-10 05:04:18.302004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.302019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.302106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.302123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.302312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.302330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.302415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.302433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.302585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.302604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 
00:27:27.521 [2024-12-10 05:04:18.302721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.302738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.302969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.302989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.303207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.303226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.303391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.303408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.303596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.303613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 
00:27:27.521 [2024-12-10 05:04:18.303714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.303731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.303911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.303928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.304086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.304104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.304213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.304229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.304324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.304342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 
00:27:27.521 [2024-12-10 05:04:18.304460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.304477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.304741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.304758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.305001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.305017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.305238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.305258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.305423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.305440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 
00:27:27.521 [2024-12-10 05:04:18.305629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.305662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.305926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.305958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.306092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.306124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.306360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.306400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.306604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.306636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 
00:27:27.521 [2024-12-10 05:04:18.306839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.306872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.307078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.307111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.307334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.307368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.307637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.521 [2024-12-10 05:04:18.307655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.521 qpair failed and we were unable to recover it. 00:27:27.521 [2024-12-10 05:04:18.307784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.307802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 
00:27:27.522 [2024-12-10 05:04:18.308041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.308075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.308285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.308319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.308478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.308512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.308763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.308796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.309066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.309098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 
00:27:27.522 [2024-12-10 05:04:18.309325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.309361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.309497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.309529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.309680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.309715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.309940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.309975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.310095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.310112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 
00:27:27.522 [2024-12-10 05:04:18.310197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.310214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.310379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.310396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.310486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.310503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.310610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.310627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.310718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.310736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 
00:27:27.522 [2024-12-10 05:04:18.310813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.310828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.310905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.310920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.311030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.311050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.311205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.311223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.311322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.311339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 
00:27:27.522 [2024-12-10 05:04:18.311453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.311473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.311600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.311617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.311710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.311727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.311809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.311824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.311917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.311933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 
00:27:27.522 [2024-12-10 05:04:18.312184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.312202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.312329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.312346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.312533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.312550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.312799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.312816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.313062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.313079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 
00:27:27.522 [2024-12-10 05:04:18.313252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.313272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.313366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.313383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.313634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.313652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.313882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.313898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.313996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.314013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 
00:27:27.522 [2024-12-10 05:04:18.314184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.314202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.314356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.314374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.522 [2024-12-10 05:04:18.314622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.522 [2024-12-10 05:04:18.314640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.522 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.314739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.314755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.314944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.314962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 
00:27:27.523 [2024-12-10 05:04:18.315180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.315197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.315417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.315436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.315627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.315645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.315765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.315784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.315944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.315960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 
00:27:27.523 [2024-12-10 05:04:18.316057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.316073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.316181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.316201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.316302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.316325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.316477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.316497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.316591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.316610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 
00:27:27.523 [2024-12-10 05:04:18.316707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.316724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.317007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.317025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.317198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.317216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.317322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.317342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.317506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.317522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 
00:27:27.523 [2024-12-10 05:04:18.317640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.317656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.317876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.317910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.318046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.318078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.318312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.318346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.318554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.318587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 
00:27:27.523 [2024-12-10 05:04:18.318837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.318854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.319089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.319108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.319255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.319273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.319427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.319445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.319666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.319683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 
00:27:27.523 [2024-12-10 05:04:18.319897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.319914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.320087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.320104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.320325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.320343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.320597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.320629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.320834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.320868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 
00:27:27.523 [2024-12-10 05:04:18.321125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.321157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.321465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.321508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.321753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.523 [2024-12-10 05:04:18.321771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.523 qpair failed and we were unable to recover it. 00:27:27.523 [2024-12-10 05:04:18.321927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.524 [2024-12-10 05:04:18.321943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.524 qpair failed and we were unable to recover it. 00:27:27.524 [2024-12-10 05:04:18.322193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.524 [2024-12-10 05:04:18.322227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.524 qpair failed and we were unable to recover it. 
00:27:27.524 [2024-12-10 05:04:18.322447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.524 [2024-12-10 05:04:18.322480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.524 qpair failed and we were unable to recover it. 00:27:27.524 [2024-12-10 05:04:18.322616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.524 [2024-12-10 05:04:18.322649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.524 qpair failed and we were unable to recover it. 00:27:27.524 [2024-12-10 05:04:18.322799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.524 [2024-12-10 05:04:18.322832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.524 qpair failed and we were unable to recover it. 00:27:27.524 [2024-12-10 05:04:18.323016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.524 [2024-12-10 05:04:18.323049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.524 qpair failed and we were unable to recover it. 00:27:27.524 [2024-12-10 05:04:18.323240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.524 [2024-12-10 05:04:18.323276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.524 qpair failed and we were unable to recover it. 
00:27:27.524 [2024-12-10 05:04:18.323556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.524 [2024-12-10 05:04:18.323590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.524 qpair failed and we were unable to recover it.
00:27:27.526 [last message group repeated through 2024-12-10 05:04:18.345131]
00:27:27.527 [2024-12-10 05:04:18.345269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.345288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.345513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.345530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.345684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.345702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.345927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.345960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.346154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.346179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 
00:27:27.527 [2024-12-10 05:04:18.346297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.346313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.346562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.346580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.346761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.346778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.346955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.346971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.347073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.347091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 
00:27:27.527 [2024-12-10 05:04:18.347194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.347210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.347392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.347409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.347561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.347580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.347679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.347694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.347900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.347919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 
00:27:27.527 [2024-12-10 05:04:18.348132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.348153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.348380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.348400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.348559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.348578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.348746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.348781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.348906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.348938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 
00:27:27.527 [2024-12-10 05:04:18.349222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.349258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.349464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.349481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.349590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.349608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.349724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.527 [2024-12-10 05:04:18.349741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.527 qpair failed and we were unable to recover it. 00:27:27.527 [2024-12-10 05:04:18.349914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.349931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 
00:27:27.528 [2024-12-10 05:04:18.350112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.350131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.350234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.350251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.350442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.350459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.350578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.350597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.350762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.350779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 
00:27:27.528 [2024-12-10 05:04:18.350871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.350889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.351056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.351074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.351152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.351175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.351435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.351452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.351571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.351588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 
00:27:27.528 [2024-12-10 05:04:18.351825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.351843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.352025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.352041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.352221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.352239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.352401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.352420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.352598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.352615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 
00:27:27.528 [2024-12-10 05:04:18.352771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.352810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.353031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.353064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.353252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.353293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.353548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.353565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.353685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.353702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 
00:27:27.528 [2024-12-10 05:04:18.353935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.353969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.354151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.354198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.354384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.354417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.354553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.354586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.354717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.354751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 
00:27:27.528 [2024-12-10 05:04:18.354945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.354978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.355219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.355256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.355471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.355506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.355662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.355680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.355771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.355787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 
00:27:27.528 [2024-12-10 05:04:18.355965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.355981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.356474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.356501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.356682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.356701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.356951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.356968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.357191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.357209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 
00:27:27.528 [2024-12-10 05:04:18.357377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.357393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.357493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.357508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.357634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.357652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.528 [2024-12-10 05:04:18.357984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.528 [2024-12-10 05:04:18.358000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.528 qpair failed and we were unable to recover it. 00:27:27.529 [2024-12-10 05:04:18.358240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.529 [2024-12-10 05:04:18.358258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.529 qpair failed and we were unable to recover it. 
00:27:27.529 [2024-12-10 05:04:18.358360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.529 [2024-12-10 05:04:18.358376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.529 qpair failed and we were unable to recover it. 00:27:27.529 [2024-12-10 05:04:18.358545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.529 [2024-12-10 05:04:18.358562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.529 qpair failed and we were unable to recover it. 00:27:27.529 [2024-12-10 05:04:18.358658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.529 [2024-12-10 05:04:18.358674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.529 qpair failed and we were unable to recover it. 00:27:27.529 [2024-12-10 05:04:18.358797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.529 [2024-12-10 05:04:18.358814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.529 qpair failed and we were unable to recover it. 00:27:27.529 [2024-12-10 05:04:18.358964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.529 [2024-12-10 05:04:18.358985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.529 qpair failed and we were unable to recover it. 
00:27:27.529 [2024-12-10 05:04:18.359164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.529 [2024-12-10 05:04:18.359202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.529 qpair failed and we were unable to recover it. 00:27:27.529 [2024-12-10 05:04:18.359276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.529 [2024-12-10 05:04:18.359291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.529 qpair failed and we were unable to recover it. 00:27:27.529 [2024-12-10 05:04:18.359392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.529 [2024-12-10 05:04:18.359408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.529 qpair failed and we were unable to recover it. 00:27:27.529 [2024-12-10 05:04:18.359516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.529 [2024-12-10 05:04:18.359533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.529 qpair failed and we were unable to recover it. 00:27:27.529 [2024-12-10 05:04:18.359632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.529 [2024-12-10 05:04:18.359650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.529 qpair failed and we were unable to recover it. 
00:27:27.529 [2024-12-10 05:04:18.359748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.359766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.359867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.359883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.359972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.359988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.360096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.360111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.360212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.360229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.360416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.360434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.360544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.360563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.360658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.360673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.360776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.360794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.360957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.360973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.361131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.361151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.361345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.361365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.361475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.361495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.361587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.361602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.361759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.361776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.361938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.361956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.362055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.362071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.362158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.362184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.362285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.362303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.362454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.362474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.362565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.362580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.529 [2024-12-10 05:04:18.362692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.529 [2024-12-10 05:04:18.362708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.529 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.362816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.362833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.362926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.362942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.363092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.363111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.363361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.363381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.363474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.363490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.363599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.363616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.363712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.363728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.363817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.363833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.363990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.364008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.364197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.364214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.364317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.364333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.364435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.364452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.364644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.364663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.364774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.364792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.364869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.364884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.364970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.364987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.365102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.365117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.365222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.365240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.365417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.365432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.365617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.365632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.365751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.365768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.366018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.366051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.366271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.366290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.366388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.366403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.366505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.366521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.366646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.366663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.366767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.366783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.366887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.366903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.367013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.367029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.367139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.367157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.367265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.367281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.367435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.367452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.367645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.367662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.367750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.367768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.367929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.367945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.368027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.368042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.368151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.368176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.368272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.530 [2024-12-10 05:04:18.368288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.530 qpair failed and we were unable to recover it.
00:27:27.530 [2024-12-10 05:04:18.368457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.368475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.368568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.368584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.368811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.368832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.369052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.369070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.369340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.369359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.369528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.369545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.369695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.369712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.369821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.369839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.370049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.370066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.370232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.370251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.370429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.370445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.370568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.370587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.370734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.370750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.370920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.370936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.371145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.371207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.371413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.371446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.371639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.371673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.371874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.371908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.372162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.372189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.372362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.372379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.372531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.372568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.372863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.372898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.373124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.373160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.373401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.373438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.373581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.373620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.373794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.373812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.373989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.374006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.374101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.374118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.374268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.374287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.374394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.374414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.374497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.374513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.374628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.374646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.374798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.374815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.374912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.374929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.375178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.375197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.375315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.375332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.375503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.375521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.531 [2024-12-10 05:04:18.375758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.531 [2024-12-10 05:04:18.375792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.531 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.376077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.532 [2024-12-10 05:04:18.376111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.532 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.376388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.532 [2024-12-10 05:04:18.376405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.532 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.376567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.532 [2024-12-10 05:04:18.376584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.532 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.376734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.532 [2024-12-10 05:04:18.376752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.532 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.376841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.532 [2024-12-10 05:04:18.376857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.532 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.377106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.532 [2024-12-10 05:04:18.377142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.532 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.377349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.532 [2024-12-10 05:04:18.377365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.532 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.377522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.532 [2024-12-10 05:04:18.377539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.532 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.377640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.532 [2024-12-10 05:04:18.377655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.532 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.377756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.532 [2024-12-10 05:04:18.377773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.532 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.377993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.532 [2024-12-10 05:04:18.378009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.532 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.378289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.532 [2024-12-10 05:04:18.378307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.532 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.378409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.532 [2024-12-10 05:04:18.378425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.532 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.378534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.532 [2024-12-10 05:04:18.378551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.532 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.378702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.532 [2024-12-10 05:04:18.378718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.532 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.378962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.532 [2024-12-10 05:04:18.378978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.532 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.379136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.532 [2024-12-10 05:04:18.379154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.532 qpair failed and we were unable to recover it.
00:27:27.532 [2024-12-10 05:04:18.379285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.379304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.532 [2024-12-10 05:04:18.379469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.379486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.532 [2024-12-10 05:04:18.379617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.379635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.532 [2024-12-10 05:04:18.379789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.379807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.532 [2024-12-10 05:04:18.379959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.379977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 
00:27:27.532 [2024-12-10 05:04:18.380226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.380262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.532 [2024-12-10 05:04:18.380480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.380513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.532 [2024-12-10 05:04:18.380733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.380766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.532 [2024-12-10 05:04:18.380965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.381000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.532 [2024-12-10 05:04:18.381109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.381157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 
00:27:27.532 [2024-12-10 05:04:18.381426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.381444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.532 [2024-12-10 05:04:18.381560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.381577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.532 [2024-12-10 05:04:18.381734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.381752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.532 [2024-12-10 05:04:18.381919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.381937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.532 [2024-12-10 05:04:18.382030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.382046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 
00:27:27.532 [2024-12-10 05:04:18.382204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.382224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.532 [2024-12-10 05:04:18.382463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.382481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.532 [2024-12-10 05:04:18.382654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.382687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.532 [2024-12-10 05:04:18.383012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.383045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.532 [2024-12-10 05:04:18.383247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.383284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 
00:27:27.532 [2024-12-10 05:04:18.383523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.383542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.532 [2024-12-10 05:04:18.383757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.532 [2024-12-10 05:04:18.383776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.532 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.383958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.383975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.384140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.384159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.384342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.384375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 
00:27:27.533 [2024-12-10 05:04:18.384628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.384661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.384842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.384877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.385084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.385118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.385402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.385420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.385520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.385537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 
00:27:27.533 [2024-12-10 05:04:18.385727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.385743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.385928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.385946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.386096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.386112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.386304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.386322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.386524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.386542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 
00:27:27.533 [2024-12-10 05:04:18.386800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.386817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.386932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.386949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.387102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.387119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.387319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.387338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.387585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.387604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 
00:27:27.533 [2024-12-10 05:04:18.387715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.387733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.387933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.387950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.388184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.388206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.388374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.388392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.388492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.388510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 
00:27:27.533 [2024-12-10 05:04:18.388609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.388628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.388715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.388731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.388887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.388903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.389068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.389111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.389305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.389339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 
00:27:27.533 [2024-12-10 05:04:18.389537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.389572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.389768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.389785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.389881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.389897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.390138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.390156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.390315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.390332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 
00:27:27.533 [2024-12-10 05:04:18.390508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.390525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.390699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.390717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.390921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.390956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.391285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.391320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.391518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.391550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 
00:27:27.533 [2024-12-10 05:04:18.391832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.533 [2024-12-10 05:04:18.391850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.533 qpair failed and we were unable to recover it. 00:27:27.533 [2024-12-10 05:04:18.392000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.534 [2024-12-10 05:04:18.392017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.534 qpair failed and we were unable to recover it. 00:27:27.534 [2024-12-10 05:04:18.392208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.534 [2024-12-10 05:04:18.392226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.534 qpair failed and we were unable to recover it. 00:27:27.534 [2024-12-10 05:04:18.392317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.534 [2024-12-10 05:04:18.392333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.534 qpair failed and we were unable to recover it. 00:27:27.534 [2024-12-10 05:04:18.392458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.534 [2024-12-10 05:04:18.392475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.534 qpair failed and we were unable to recover it. 
00:27:27.534 [2024-12-10 05:04:18.392583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.534 [2024-12-10 05:04:18.392600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.534 qpair failed and we were unable to recover it. 00:27:27.534 [2024-12-10 05:04:18.392685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.534 [2024-12-10 05:04:18.392702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.534 qpair failed and we were unable to recover it. 00:27:27.534 [2024-12-10 05:04:18.392820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.534 [2024-12-10 05:04:18.392838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.534 qpair failed and we were unable to recover it. 00:27:27.534 [2024-12-10 05:04:18.393030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.534 [2024-12-10 05:04:18.393047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.534 qpair failed and we were unable to recover it. 00:27:27.534 [2024-12-10 05:04:18.393231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.534 [2024-12-10 05:04:18.393256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.534 qpair failed and we were unable to recover it. 
00:27:27.534 [2024-12-10 05:04:18.393413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.534 [2024-12-10 05:04:18.393429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.534 qpair failed and we were unable to recover it. 00:27:27.534 [2024-12-10 05:04:18.393553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.534 [2024-12-10 05:04:18.393570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.534 qpair failed and we were unable to recover it. 00:27:27.534 [2024-12-10 05:04:18.393666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.534 [2024-12-10 05:04:18.393681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.534 qpair failed and we were unable to recover it. 00:27:27.534 [2024-12-10 05:04:18.393794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.534 [2024-12-10 05:04:18.393810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.534 qpair failed and we were unable to recover it. 00:27:27.534 [2024-12-10 05:04:18.393958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.534 [2024-12-10 05:04:18.393975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.534 qpair failed and we were unable to recover it. 
00:27:27.534 [2024-12-10 05:04:18.394203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.534 [2024-12-10 05:04:18.394222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.534 qpair failed and we were unable to recover it.
[... identical connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x12521a0 (addr=10.0.0.2, port=4420) repeated continuously from 2024-12-10 05:04:18.394330 through 05:04:18.415824 ...]
00:27:27.537 [2024-12-10 05:04:18.415972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.415991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.416252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.416272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.416446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.416478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.416678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.416711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.417002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.417036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 
00:27:27.537 [2024-12-10 05:04:18.417234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.417271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.417598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.417631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.417727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.417744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.417898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.417915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.418017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.418035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 
00:27:27.537 [2024-12-10 05:04:18.418208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.418226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.418399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.418416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.418577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.418594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.418812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.418829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.418940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.418956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 
00:27:27.537 [2024-12-10 05:04:18.419070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.419089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.419376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.419395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.419547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.419563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.419681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.419698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.419864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.419881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 
00:27:27.537 [2024-12-10 05:04:18.419990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.420008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.420241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.420261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.420442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.537 [2024-12-10 05:04:18.420460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.537 qpair failed and we were unable to recover it. 00:27:27.537 [2024-12-10 05:04:18.420642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.420659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.420963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.420997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 
00:27:27.538 [2024-12-10 05:04:18.421293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.421331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.421517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.421534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.421704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.421721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.421981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.421999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.422158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.422192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 
00:27:27.538 [2024-12-10 05:04:18.422416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.422433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.422601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.422618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.422726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.422743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.422925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.422942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.423125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.423141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 
00:27:27.538 [2024-12-10 05:04:18.423310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.423330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.423557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.423574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.423691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.423708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.423885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.423902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.424065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.424098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 
00:27:27.538 [2024-12-10 05:04:18.424354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.424390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.424637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.424669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.424951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.424983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.425198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.425234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.425439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.425473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 
00:27:27.538 [2024-12-10 05:04:18.425669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.425702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.425910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.425927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.426111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.426128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.426353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.426372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.426495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.426514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 
00:27:27.538 [2024-12-10 05:04:18.426730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.426748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.426927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.426943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.427073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.427092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.427247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.427265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.427378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.427395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 
00:27:27.538 [2024-12-10 05:04:18.427577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.427596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.427750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.427768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.538 qpair failed and we were unable to recover it. 00:27:27.538 [2024-12-10 05:04:18.427913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.538 [2024-12-10 05:04:18.427931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 00:27:27.539 [2024-12-10 05:04:18.428197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.539 [2024-12-10 05:04:18.428216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 00:27:27.539 [2024-12-10 05:04:18.428385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.539 [2024-12-10 05:04:18.428402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 
00:27:27.539 [2024-12-10 05:04:18.428660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.539 [2024-12-10 05:04:18.428694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 00:27:27.539 [2024-12-10 05:04:18.428902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.539 [2024-12-10 05:04:18.428936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 00:27:27.539 [2024-12-10 05:04:18.429227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.539 [2024-12-10 05:04:18.429264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 00:27:27.539 [2024-12-10 05:04:18.429412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.539 [2024-12-10 05:04:18.429444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 00:27:27.539 [2024-12-10 05:04:18.429602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.539 [2024-12-10 05:04:18.429634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 
00:27:27.539 [2024-12-10 05:04:18.429783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.539 [2024-12-10 05:04:18.429800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 00:27:27.539 [2024-12-10 05:04:18.429955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.539 [2024-12-10 05:04:18.429972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 00:27:27.539 [2024-12-10 05:04:18.430068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.539 [2024-12-10 05:04:18.430083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 00:27:27.539 [2024-12-10 05:04:18.430201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.539 [2024-12-10 05:04:18.430219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 00:27:27.539 [2024-12-10 05:04:18.430383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.539 [2024-12-10 05:04:18.430406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 
00:27:27.539 [2024-12-10 05:04:18.430558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.539 [2024-12-10 05:04:18.430574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 00:27:27.539 [2024-12-10 05:04:18.430860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.539 [2024-12-10 05:04:18.430896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 00:27:27.539 [2024-12-10 05:04:18.431118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.539 [2024-12-10 05:04:18.431149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 00:27:27.539 [2024-12-10 05:04:18.431395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.539 [2024-12-10 05:04:18.431438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 00:27:27.539 [2024-12-10 05:04:18.431545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.539 [2024-12-10 05:04:18.431564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.539 qpair failed and we were unable to recover it. 
00:27:27.539 [2024-12-10 05:04:18.431658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.539 [2024-12-10 05:04:18.431675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.539 qpair failed and we were unable to recover it.
00:27:27.542 [2024-12-10 05:04:18.454581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.454598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.454713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.454734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.454954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.454973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.455191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.455229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.455500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.455532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 
00:27:27.542 [2024-12-10 05:04:18.455724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.455764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.455886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.455905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.456123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.456140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.456336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.456357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.456507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.456525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 
00:27:27.542 [2024-12-10 05:04:18.456629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.456644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.456794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.456813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.456978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.456994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.457215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.457234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.457332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.457348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 
00:27:27.542 [2024-12-10 05:04:18.457545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.457565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.457669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.457685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.457929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.457945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.458023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.458040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.458196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.458214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 
00:27:27.542 [2024-12-10 05:04:18.458403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.458420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.458579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.542 [2024-12-10 05:04:18.458598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.542 qpair failed and we were unable to recover it. 00:27:27.542 [2024-12-10 05:04:18.458695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.458711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.458957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.458975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.459124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.459142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 
00:27:27.543 [2024-12-10 05:04:18.459378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.459395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.459544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.459561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.459659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.459675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.459846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.459866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.460072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.460090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 
00:27:27.543 [2024-12-10 05:04:18.460276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.460294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.460420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.460437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.460710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.460728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.460834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.460850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.461092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.461109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 
00:27:27.543 [2024-12-10 05:04:18.461219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.461237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.461384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.461401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.461501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.461518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.461675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.461692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.461868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.461887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 
00:27:27.543 [2024-12-10 05:04:18.462041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.462058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.462267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.462301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.462594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.462629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.462820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.462840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.463085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.463118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 
00:27:27.543 [2024-12-10 05:04:18.463304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.463337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.463472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.463507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.463658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.463675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.463978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.464011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.464223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.464259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 
00:27:27.543 [2024-12-10 05:04:18.464491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.464509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.464762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.464781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.465005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.465038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.465231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.465267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.465452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.465484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 
00:27:27.543 [2024-12-10 05:04:18.465645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.543 [2024-12-10 05:04:18.465662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.543 qpair failed and we were unable to recover it. 00:27:27.543 [2024-12-10 05:04:18.465882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.465917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.466050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.466085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.466291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.466327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.466586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.466606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 
00:27:27.544 [2024-12-10 05:04:18.466708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.466723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.466923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.466941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.467089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.467108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.467299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.467319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.467489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.467505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 
00:27:27.544 [2024-12-10 05:04:18.467670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.467713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.467969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.468002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.468194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.468230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.468444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.468478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.468673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.468693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 
00:27:27.544 [2024-12-10 05:04:18.468871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.468888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.469058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.469074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.469268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.469288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.469447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.469463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.469663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.469696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 
00:27:27.544 [2024-12-10 05:04:18.469919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.469952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.470152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.470199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.470347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.470364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.470608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.470628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 00:27:27.544 [2024-12-10 05:04:18.470712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.544 [2024-12-10 05:04:18.470727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.544 qpair failed and we were unable to recover it. 
00:27:27.547 [2024-12-10 05:04:18.492217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.492250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 00:27:27.547 [2024-12-10 05:04:18.492411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.492446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 00:27:27.547 [2024-12-10 05:04:18.492657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.492690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 00:27:27.547 [2024-12-10 05:04:18.492958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.492976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 00:27:27.547 [2024-12-10 05:04:18.493134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.493152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 
00:27:27.547 [2024-12-10 05:04:18.493338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.493356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 00:27:27.547 [2024-12-10 05:04:18.493467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.493486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 00:27:27.547 [2024-12-10 05:04:18.493653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.493669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 00:27:27.547 [2024-12-10 05:04:18.493851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.493871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 00:27:27.547 [2024-12-10 05:04:18.494113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.494131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 
00:27:27.547 [2024-12-10 05:04:18.494304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.494322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 00:27:27.547 [2024-12-10 05:04:18.494438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.494455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 00:27:27.547 [2024-12-10 05:04:18.494615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.494633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 00:27:27.547 [2024-12-10 05:04:18.494916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.494933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 00:27:27.547 [2024-12-10 05:04:18.495105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.495125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 
00:27:27.547 [2024-12-10 05:04:18.495326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.495344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 00:27:27.547 [2024-12-10 05:04:18.495515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.495532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 00:27:27.547 [2024-12-10 05:04:18.495655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.495673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 00:27:27.547 [2024-12-10 05:04:18.495838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.547 [2024-12-10 05:04:18.495857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.547 qpair failed and we were unable to recover it. 00:27:27.547 [2024-12-10 05:04:18.496008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.496026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 
00:27:27.548 [2024-12-10 05:04:18.496181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.496198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.496311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.496331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.496451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.496469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.496581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.496598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.496713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.496729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 
00:27:27.548 [2024-12-10 05:04:18.496843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.496858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.497098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.497114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.497274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.497291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.497464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.497485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.497664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.497696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 
00:27:27.548 [2024-12-10 05:04:18.497918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.497955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.498250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.498285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.498431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.498465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.498601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.498633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.498869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.498888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 
00:27:27.548 [2024-12-10 05:04:18.499121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.499155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.499379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.499413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.499574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.499608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.499752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.499771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.499937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.499955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 
00:27:27.548 [2024-12-10 05:04:18.500105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.500122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.500292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.500311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.500435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.500456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.500575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.500592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.500712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.500728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 
00:27:27.548 [2024-12-10 05:04:18.500842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.500858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.501074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.501092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.501248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.501267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.501371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.501387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.501564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.501581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 
00:27:27.548 [2024-12-10 05:04:18.501737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.501755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.502013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.502029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.502292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.502312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.502436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.502455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.502567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.502585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 
00:27:27.548 [2024-12-10 05:04:18.502734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.502754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.502912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.502931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.503015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.503030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.548 qpair failed and we were unable to recover it. 00:27:27.548 [2024-12-10 05:04:18.503229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.548 [2024-12-10 05:04:18.503246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 00:27:27.549 [2024-12-10 05:04:18.503340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.503356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 
00:27:27.549 [2024-12-10 05:04:18.503525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.503543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 00:27:27.549 [2024-12-10 05:04:18.503709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.503724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 00:27:27.549 [2024-12-10 05:04:18.503923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.503940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 00:27:27.549 [2024-12-10 05:04:18.504208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.504226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 00:27:27.549 [2024-12-10 05:04:18.504407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.504428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 
00:27:27.549 [2024-12-10 05:04:18.504538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.504560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 00:27:27.549 [2024-12-10 05:04:18.504652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.504669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 00:27:27.549 [2024-12-10 05:04:18.504780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.504800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 00:27:27.549 [2024-12-10 05:04:18.504978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.504999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 00:27:27.549 [2024-12-10 05:04:18.505105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.505124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 
00:27:27.549 [2024-12-10 05:04:18.505231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.505254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 00:27:27.549 [2024-12-10 05:04:18.505408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.505444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 00:27:27.549 [2024-12-10 05:04:18.505630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.505650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 00:27:27.549 [2024-12-10 05:04:18.506713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.506751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 00:27:27.549 [2024-12-10 05:04:18.506948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.506968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 
00:27:27.549 [2024-12-10 05:04:18.507225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.507245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 00:27:27.549 [2024-12-10 05:04:18.507424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.507442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 00:27:27.549 [2024-12-10 05:04:18.507562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.507596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 00:27:27.549 [2024-12-10 05:04:18.507829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.507864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 00:27:27.549 [2024-12-10 05:04:18.508011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.549 [2024-12-10 05:04:18.508047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.549 qpair failed and we were unable to recover it. 
00:27:27.552 [2024-12-10 05:04:18.532817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.532852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 00:27:27.552 [2024-12-10 05:04:18.533190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.533226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 00:27:27.552 [2024-12-10 05:04:18.533411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.533446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 00:27:27.552 [2024-12-10 05:04:18.533641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.533675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 00:27:27.552 [2024-12-10 05:04:18.533877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.533897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 
00:27:27.552 [2024-12-10 05:04:18.534077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.534111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 00:27:27.552 [2024-12-10 05:04:18.534243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.534280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 00:27:27.552 [2024-12-10 05:04:18.534468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.534502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 00:27:27.552 [2024-12-10 05:04:18.534731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.534765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 00:27:27.552 [2024-12-10 05:04:18.535011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.535030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 
00:27:27.552 [2024-12-10 05:04:18.535207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.535228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 00:27:27.552 [2024-12-10 05:04:18.535393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.535411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 00:27:27.552 [2024-12-10 05:04:18.535657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.535691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 00:27:27.552 [2024-12-10 05:04:18.535999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.536033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 00:27:27.552 [2024-12-10 05:04:18.536237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.536256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 
00:27:27.552 [2024-12-10 05:04:18.536353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.536370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 00:27:27.552 [2024-12-10 05:04:18.536555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.536590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 00:27:27.552 [2024-12-10 05:04:18.536796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.552 [2024-12-10 05:04:18.536832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.552 qpair failed and we were unable to recover it. 00:27:27.552 [2024-12-10 05:04:18.537157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.537227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.537488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.537528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 
00:27:27.553 [2024-12-10 05:04:18.537752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.537787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.538056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.538075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.538235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.538254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.538479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.538497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.538625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.538645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 
00:27:27.553 [2024-12-10 05:04:18.538888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.538908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.539072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.539092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.539302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.539340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.539561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.539597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.539806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.539841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 
00:27:27.553 [2024-12-10 05:04:18.540024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.540043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.540294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.540331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.540543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.540577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.540859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.540893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.541114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.541147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 
00:27:27.553 [2024-12-10 05:04:18.541355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.541391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.541624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.541658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.541839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.541872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.542125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.542158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.542471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.542507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 
00:27:27.553 [2024-12-10 05:04:18.542779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.542813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.542996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.543030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.543216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.543252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.543387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.543420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.543645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.543679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 
00:27:27.553 [2024-12-10 05:04:18.543889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.543908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.544158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.544185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.544341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.544361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.544655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.544690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.544967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.545001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 
00:27:27.553 [2024-12-10 05:04:18.545211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.545247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.545370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.545403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.545653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.545687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.545978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.546012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.546318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.546355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 
00:27:27.553 [2024-12-10 05:04:18.546609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.546649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.546951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.546986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.553 qpair failed and we were unable to recover it. 00:27:27.553 [2024-12-10 05:04:18.547245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.553 [2024-12-10 05:04:18.547281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.547580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.547613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.547762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.547797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 
00:27:27.554 [2024-12-10 05:04:18.548075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.548108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.548313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.548349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.548532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.548566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.548850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.548883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.549149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.549196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 
00:27:27.554 [2024-12-10 05:04:18.549320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.549354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.549557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.549591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.549748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.549767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.549860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.549877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.550028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.550047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 
00:27:27.554 [2024-12-10 05:04:18.550291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.550327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.550552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.550587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.550773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.550808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.551069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.551087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.551202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.551222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 
00:27:27.554 [2024-12-10 05:04:18.551463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.551483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.551633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.551652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.551841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.551860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.552030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.552049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 00:27:27.554 [2024-12-10 05:04:18.552290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.554 [2024-12-10 05:04:18.552309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.554 qpair failed and we were unable to recover it. 
00:27:27.557 [2024-12-10 05:04:18.578957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.578976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 00:27:27.557 [2024-12-10 05:04:18.579209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.579230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 00:27:27.557 [2024-12-10 05:04:18.579328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.579344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 00:27:27.557 [2024-12-10 05:04:18.579523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.579541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 00:27:27.557 [2024-12-10 05:04:18.579725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.579743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 
00:27:27.557 [2024-12-10 05:04:18.579861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.579880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 00:27:27.557 [2024-12-10 05:04:18.580106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.580125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 00:27:27.557 [2024-12-10 05:04:18.580293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.580312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 00:27:27.557 [2024-12-10 05:04:18.580483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.580501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 00:27:27.557 [2024-12-10 05:04:18.580682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.580716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 
00:27:27.557 [2024-12-10 05:04:18.580902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.580936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 00:27:27.557 [2024-12-10 05:04:18.581190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.581225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 00:27:27.557 [2024-12-10 05:04:18.581478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.581513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 00:27:27.557 [2024-12-10 05:04:18.581794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.581838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 00:27:27.557 [2024-12-10 05:04:18.581999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.582022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 
00:27:27.557 [2024-12-10 05:04:18.582193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.582231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 00:27:27.557 [2024-12-10 05:04:18.582432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.582466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 00:27:27.557 [2024-12-10 05:04:18.582663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.582699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 00:27:27.557 [2024-12-10 05:04:18.582949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.557 [2024-12-10 05:04:18.582967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.557 qpair failed and we were unable to recover it. 00:27:27.557 [2024-12-10 05:04:18.583143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.583162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 
00:27:27.558 [2024-12-10 05:04:18.583392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.583412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.583665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.583700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.583949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.583984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.584183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.584202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.584424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.584444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 
00:27:27.558 [2024-12-10 05:04:18.584618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.584653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.584859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.584895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.585112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.585145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.585444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.585481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.585741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.585776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 
00:27:27.558 [2024-12-10 05:04:18.586021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.586066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.586181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.586201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.586442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.586461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.586647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.586665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.586908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.586927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 
00:27:27.558 [2024-12-10 05:04:18.587218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.587256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.587541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.587575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.587772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.587807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.588100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.588119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.588338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.588357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 
00:27:27.558 [2024-12-10 05:04:18.588471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.588491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.588660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.588679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.588764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.588781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.588963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12600f0 is same with the state(6) to be set 00:27:27.558 [2024-12-10 05:04:18.589342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.589424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.589645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.589684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 
00:27:27.558 [2024-12-10 05:04:18.589902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.589925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.590154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.590180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.590338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.590356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.590518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.590552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.590827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.590861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 
00:27:27.558 [2024-12-10 05:04:18.591118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.591138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.591391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.591412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.591600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.591619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.591837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.591856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.592022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.592041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 
00:27:27.558 [2024-12-10 05:04:18.592270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.592307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.592443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.592477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.592813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.558 [2024-12-10 05:04:18.592848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.558 qpair failed and we were unable to recover it. 00:27:27.558 [2024-12-10 05:04:18.593108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.593142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 00:27:27.559 [2024-12-10 05:04:18.593453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.593489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 
00:27:27.559 [2024-12-10 05:04:18.593744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.593778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 00:27:27.559 [2024-12-10 05:04:18.593983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.594017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 00:27:27.559 [2024-12-10 05:04:18.594205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.594241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 00:27:27.559 [2024-12-10 05:04:18.594516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.594549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 00:27:27.559 [2024-12-10 05:04:18.594822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.594856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 
00:27:27.559 [2024-12-10 05:04:18.595039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.595078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 00:27:27.559 [2024-12-10 05:04:18.595174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.595191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 00:27:27.559 [2024-12-10 05:04:18.595337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.595383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 00:27:27.559 [2024-12-10 05:04:18.595583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.595624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 00:27:27.559 [2024-12-10 05:04:18.595813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.595848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 
00:27:27.559 [2024-12-10 05:04:18.596037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.596057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 00:27:27.559 [2024-12-10 05:04:18.596278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.596298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 00:27:27.559 [2024-12-10 05:04:18.596481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.596516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 00:27:27.559 [2024-12-10 05:04:18.596702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.596736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 00:27:27.559 [2024-12-10 05:04:18.596925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.596959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 
00:27:27.559 [2024-12-10 05:04:18.597137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.597155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 00:27:27.559 [2024-12-10 05:04:18.597354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.597390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 00:27:27.559 [2024-12-10 05:04:18.597665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.597698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 00:27:27.559 [2024-12-10 05:04:18.597900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.597918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 00:27:27.559 [2024-12-10 05:04:18.598068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.559 [2024-12-10 05:04:18.598114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.559 qpair failed and we were unable to recover it. 
00:27:27.561 [... identical posix.c:1054:posix_sock_create connect() failed (errno = 111) and nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock errors for tqpair=0x12521a0 with addr=10.0.0.2, port=4420 repeat through [2024-12-10 05:04:18.624549]; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:27.943 [2024-12-10 05:04:18.624783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.624806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.625077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.625096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.625260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.625280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.625508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.625527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.625678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.625698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 
00:27:27.943 [2024-12-10 05:04:18.625859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.625879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.626051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.626070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.626319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.626355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.626645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.626681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.626919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.626938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 
00:27:27.943 [2024-12-10 05:04:18.627104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.627123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.627345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.627365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.627456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.627474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.627723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.627757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.627962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.627980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 
00:27:27.943 [2024-12-10 05:04:18.628146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.628173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.628357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.628392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.628590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.628624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.628829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.628864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.629144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.629202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 
00:27:27.943 [2024-12-10 05:04:18.629435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.629454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.629651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.629685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.629875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.629893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.630093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.630127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.630373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.630410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 
00:27:27.943 [2024-12-10 05:04:18.630666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.630700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.630916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.630935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.631118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.631159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.631306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.631340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.631543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.631576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 
00:27:27.943 [2024-12-10 05:04:18.631837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.631874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.632130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.632164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.632388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.632407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.632499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.632516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 00:27:27.943 [2024-12-10 05:04:18.632759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.943 [2024-12-10 05:04:18.632777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.943 qpair failed and we were unable to recover it. 
00:27:27.943 [2024-12-10 05:04:18.633020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.633039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.633260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.633279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.633497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.633516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.633762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.633780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.633935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.633953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 
00:27:27.944 [2024-12-10 05:04:18.634175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.634196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.634465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.634500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.634687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.634722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.634987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.635006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.635274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.635295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 
00:27:27.944 [2024-12-10 05:04:18.635441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.635459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.635629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.635648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.635832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.635850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.636048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.636082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.636289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.636325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 
00:27:27.944 [2024-12-10 05:04:18.636529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.636563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.636788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.636822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.637010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.637044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.637287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.637307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.637400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.637418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 
00:27:27.944 [2024-12-10 05:04:18.637593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.637613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.637780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.637799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.638047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.638082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.638275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.638311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.638621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.638655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 
00:27:27.944 [2024-12-10 05:04:18.638839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.638872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.639127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.639161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.639294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.639314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.639522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.639540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.639706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.639725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 
00:27:27.944 [2024-12-10 05:04:18.639893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.639913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.640160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.640206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.640407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.640442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.640574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.640609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.640921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.640956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 
00:27:27.944 [2024-12-10 05:04:18.641081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.641115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.641316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.641336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.641541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.641560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.641727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.641761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-10 05:04:18.641883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-10 05:04:18.641917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 
00:27:27.944 [2024-12-10 05:04:18.642194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.944 [2024-12-10 05:04:18.642229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.944 qpair failed and we were unable to recover it.
00:27:27.944 [... the same three-line failure repeats for every subsequent reconnect attempt from 05:04:18.642309 through 05:04:18.671544: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error on tqpair=0x12521a0 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:27.946 [2024-12-10 05:04:18.671947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.672037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.672277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.672321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.672608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.672644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.672892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.672931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.673128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.673176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 
00:27:27.946 [2024-12-10 05:04:18.673450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.673471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.673720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.673741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.673905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.673950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.674183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.674219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.674477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.674512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 
00:27:27.946 [2024-12-10 05:04:18.674799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.674833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.675057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.675091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.675395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.675432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.675716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.675751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.675947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.675981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 
00:27:27.946 [2024-12-10 05:04:18.676192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.676228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.676354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.676388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.676583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.676618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.676924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.676958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.677224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.677261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 
00:27:27.946 [2024-12-10 05:04:18.677539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.677560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.677671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.677690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.677862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.677883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.677999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.678019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.678200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.678236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 
00:27:27.946 [2024-12-10 05:04:18.678465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.678499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.678641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.678675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.678875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.678916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.679211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.679231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.679397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.679416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 
00:27:27.946 [2024-12-10 05:04:18.679600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.679638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.679873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.679908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.680033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.680054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-10 05:04:18.680161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-10 05:04:18.680188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.680286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.680303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-12-10 05:04:18.680458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.680479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.680639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.680674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.680861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.680897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.681186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.681222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.681494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.681530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-12-10 05:04:18.681805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.681841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.681969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.681990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.682154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.682184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.682381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.682403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.682594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.682628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-12-10 05:04:18.682771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.682806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.683021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.683057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.683250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.683270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.683395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.683414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.683667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.683686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-12-10 05:04:18.683860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.683879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.684140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.684159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.684264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.684281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.684451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.684471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.684629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.684652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-12-10 05:04:18.684756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.684774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.684970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.684989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.685141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.685159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.685381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.685400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.685561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.685605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-12-10 05:04:18.685825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.685860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.686012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.686048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.686331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.686351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.686531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.686549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.686739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.686774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-12-10 05:04:18.686956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.686992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.687196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.687216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.687310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.687327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.687480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.687500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.687605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.687623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-12-10 05:04:18.687775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.687795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.687970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.688006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.688148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.688194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.688393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.688428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.688625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.688659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-12-10 05:04:18.688869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.688903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.689109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.689145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.689424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.689462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.689732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.689768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-10 05:04:18.689963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-10 05:04:18.689999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-12-10 05:04:18.710066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.710083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.710212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.710230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.710454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.710488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.710693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.710728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.711000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.711033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-12-10 05:04:18.711205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.711225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.711348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.711367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.711473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.711492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.711571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.711588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.711743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.711762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-12-10 05:04:18.711907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.711927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.712022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.712062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.712266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.712303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.712412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.712449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.712709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.712745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-12-10 05:04:18.712922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.712956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.713077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.713112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.713252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.713273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.713387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.713407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.713496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.713512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-12-10 05:04:18.713671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.713689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.713847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.713866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.713939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.713957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.714057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.714075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.714146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.714163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-12-10 05:04:18.714300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.714317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.714469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.714489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.714596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.714614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.714709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.714728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.714812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.714830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-12-10 05:04:18.714905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.714923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.715019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.715038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.715213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.715233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.715323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.715341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.715423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.715442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-12-10 05:04:18.715526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.715545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.715700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.715718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.715813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.715832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.715977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.715997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.716116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.716132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-12-10 05:04:18.716230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.716250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.716429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.716447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.716529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.716548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.716648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.716666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.716756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.716774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-12-10 05:04:18.716950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.716968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.717043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.717059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.717231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.717250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.717420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.717438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.717653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.717673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-12-10 05:04:18.717782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.717800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.717922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.717942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.718027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-10 05:04:18.718045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-10 05:04:18.718136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.718155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.718322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.718344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 
00:27:27.950 [2024-12-10 05:04:18.718433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.718451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.718544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.718563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.718731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.718750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.718896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.718917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.719006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.719025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 
00:27:27.950 [2024-12-10 05:04:18.719128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.719145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.719238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.719257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.719407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.719425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.719577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.719597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.719746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.719764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 
00:27:27.950 [2024-12-10 05:04:18.719956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.719974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.720054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.720071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.720149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.720175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.720351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.720370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.720511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.720531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 
00:27:27.950 [2024-12-10 05:04:18.720630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.720648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.720865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.720883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.720980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.721000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.721095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.721114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.721257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.721278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 
00:27:27.950 [2024-12-10 05:04:18.721435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.721454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.721674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.721693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.721780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.721798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.721895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.721913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-10 05:04:18.722014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-10 05:04:18.722033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 
00:27:27.950 [2024-12-10 05:04:18.722197 ... 05:04:18.741882] the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420" pair repeats 110 more times; each attempt ends with "qpair failed and we were unable to recover it."
00:27:27.952 [2024-12-10 05:04:18.742158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.742185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.742406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.742440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.742718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.742752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.742954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.742989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.743221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.743259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-12-10 05:04:18.743374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.743407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.743612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.743630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.743742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.743760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.743857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.743876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.743979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.743999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-12-10 05:04:18.744082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.744100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.744344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.744364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.744540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.744574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.744768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.744811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.744995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.745029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-12-10 05:04:18.745300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.745320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.745499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.745519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.745595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.745612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.745877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.745896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.746154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.746200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-12-10 05:04:18.746437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.746455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.746622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.746642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.746821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.746839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.747057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.747075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.747315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.747337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-12-10 05:04:18.747511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.747530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.747698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.747717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.747938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.747957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.748133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.748185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.748436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.748470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-12-10 05:04:18.748657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.748691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.748998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.749032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.749221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.749258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.749443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.749477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.749685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.749704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-12-10 05:04:18.749847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.749865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.750105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.750124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.750310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.750330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.750512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.750546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.750742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.750777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-12-10 05:04:18.750970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.751004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.751269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.751291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.751395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.751415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.751576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.751594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.751868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.751904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-12-10 05:04:18.752035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.752066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.752243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.752260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.752367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.752385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.752531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.752570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.752756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.752793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-12-10 05:04:18.753068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.753112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.753219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.753239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.753402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.753419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.753527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.753544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.753766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.753783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-12-10 05:04:18.753949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.753966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.754153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.754215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.754513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.754548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.754771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.754805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.755077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.755112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-12-10 05:04:18.755340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.755360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.755528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.755546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.755761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.755780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.755978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.756012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.756199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.756235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-12-10 05:04:18.756376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.756409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.756696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.756732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.756930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.756964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.757159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-10 05:04:18.757185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-10 05:04:18.757338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-10 05:04:18.757357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 
00:27:27.953 [2024-12-10 05:04:18.757604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-10 05:04:18.757621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-10 05:04:18.757789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-10 05:04:18.757807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-10 05:04:18.758044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-10 05:04:18.758077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-10 05:04:18.758396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-10 05:04:18.758415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-10 05:04:18.758510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-10 05:04:18.758527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 
00:27:27.953 [2024-12-10 05:04:18.758624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.953 [2024-12-10 05:04:18.758641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:27.953 qpair failed and we were unable to recover it.
[repeated log output elided: the same connect()/qpair-failure pair (errno = 111, i.e. ECONNREFUSED) recurs for every reconnect attempt from 05:04:18.758 through 05:04:18.785 against addr=10.0.0.2, port=4420. Most attempts report tqpair=0x12521a0; the attempts between 05:04:18.777454 and 05:04:18.779586 report tqpair=0x7f58e0000b90 before reverting to 0x12521a0. Every attempt ends with "qpair failed and we were unable to recover it."]
00:27:27.954 [2024-12-10 05:04:18.785267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-10 05:04:18.785308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-10 05:04:18.785551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-10 05:04:18.785570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-10 05:04:18.785763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-10 05:04:18.785783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-10 05:04:18.786025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-10 05:04:18.786045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-10 05:04:18.786146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-10 05:04:18.786205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 
00:27:27.954 [2024-12-10 05:04:18.786411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-10 05:04:18.786446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-10 05:04:18.786719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-10 05:04:18.786755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-10 05:04:18.787049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.787082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.787337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.787359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.787534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.787567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 
00:27:27.955 [2024-12-10 05:04:18.787822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.787857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.788150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.788196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.788461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.788496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.788686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.788702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.788874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.788906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 
00:27:27.955 [2024-12-10 05:04:18.789210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.789248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.789454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.789488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.789695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.789729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.789982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.790017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.790305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.790340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 
00:27:27.955 [2024-12-10 05:04:18.790470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.790502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.790640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.790671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.790953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.790988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.791118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.791150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.791382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.791419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 
00:27:27.955 [2024-12-10 05:04:18.791699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.791734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.792030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.792063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.792208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.792243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.792519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.792539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.792649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.792683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 
00:27:27.955 [2024-12-10 05:04:18.792895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.792932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.793138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.793183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.793413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.793432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.793665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.793699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.793910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.793944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 
00:27:27.955 [2024-12-10 05:04:18.794132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.794188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.794397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.794415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.794657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.794676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.794918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.794936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.795107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.795123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 
00:27:27.955 [2024-12-10 05:04:18.795366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.795389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.795560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.795576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.795809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.795846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.796106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.796142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.796438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.796458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 
00:27:27.955 [2024-12-10 05:04:18.796606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.796623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.796739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.796757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.796907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.796924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.797181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.797198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.797370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.797388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 
00:27:27.955 [2024-12-10 05:04:18.797493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.797511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.797734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.797750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.797828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.797845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.798104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.798138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.798385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.798420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 
00:27:27.955 [2024-12-10 05:04:18.798641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.798676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.798870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.798906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.799116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.799150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.799352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.799370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.799547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.799582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 
00:27:27.955 [2024-12-10 05:04:18.799782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.799817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.800092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.800128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.800264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.800284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.800467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.800485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.800591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.800623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 
00:27:27.955 [2024-12-10 05:04:18.800744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.800780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.801058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.801092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.801351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.801374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.801564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.801582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.801741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.801761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 
00:27:27.955 [2024-12-10 05:04:18.801944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.801979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.802242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-10 05:04:18.802279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-10 05:04:18.802529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-10 05:04:18.802548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-10 05:04:18.802711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-10 05:04:18.802729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-10 05:04:18.802948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-10 05:04:18.802981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 
00:27:27.956 [2024-12-10 05:04:18.803188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-10 05:04:18.803224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 
00:27:27.956 [... the above three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with varying timestamps from 05:04:18.803501 through 05:04:18.827861; two occurrences in that span report tqpair=0x7f58dc000b90 instead of 0x12521a0 ...]
00:27:27.957 [2024-12-10 05:04:18.828007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.828027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.828119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.828163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.828394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.828433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.828548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.828581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.828712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.828731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 
00:27:27.957 [2024-12-10 05:04:18.828912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.828933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.829030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.829048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.829298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.829336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.829519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.829553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.829810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.829829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 
00:27:27.957 [2024-12-10 05:04:18.829996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.830014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.830194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.830213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.830377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.830397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.830561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.830580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.830735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.830754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 
00:27:27.957 [2024-12-10 05:04:18.830912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.830952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.831139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.831195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.831459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.831495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.831802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.831836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.832053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.832087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 
00:27:27.957 [2024-12-10 05:04:18.832291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.832329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.832553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.832588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.832721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.832739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.832857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.832875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.832973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.832992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 
00:27:27.957 [2024-12-10 05:04:18.833067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.833087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-10 05:04:18.833215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-10 05:04:18.833237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.833400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.833418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.833540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.833573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.833703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.833738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-12-10 05:04:18.833990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.834023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.834228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.834263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.834410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.834446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.834568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.834585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.834731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.834750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-12-10 05:04:18.834866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.834884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.834991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.835011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.835272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.835310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.835427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.835461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.835605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.835639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-12-10 05:04:18.835835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.835871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.836055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.836088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.836312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.836349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.836599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.836617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.836801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.836837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-12-10 05:04:18.837016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.837050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.837302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.837322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.837499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.837520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.837703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.837736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.837871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.837904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-12-10 05:04:18.838042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.838078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.838364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.838400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.838520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.838554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.838747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.838782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.839008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.839044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-12-10 05:04:18.839237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.839274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.839472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.839507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.839792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.839827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.839956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.839990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.840125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.840160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-12-10 05:04:18.840367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.840403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.840542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.840577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.840835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.840854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.840970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.840988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.841141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.841161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-12-10 05:04:18.841257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.841282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.841366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.841385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.841652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.841691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.841937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.841973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.842102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.842136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-12-10 05:04:18.842274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.842309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.842561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.842596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.842730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.842767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.843045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.843078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.843245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.843282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-12-10 05:04:18.843532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.843567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.843703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.843721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.843968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.843987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.844183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.844201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-10 05:04:18.844417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-10 05:04:18.844436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-10 05:04:18.866113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.866130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.866284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.866302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.866457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.866475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.866564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.866581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.866728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.866747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-10 05:04:18.866824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.866840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.866996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.867018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.867095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.867113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.867274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.867292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.867381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.867398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-10 05:04:18.867538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.867555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.867637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.867656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.867742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.867760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.867838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.867856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.867939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.867956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-10 05:04:18.868045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.868063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.868174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.868192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.868350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.868385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.868511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.868544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.868721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.868756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-10 05:04:18.868929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.868962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.869089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.869123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.869330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.869365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.869635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.869654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.869725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.869743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-10 05:04:18.869907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.869924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.870009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.870031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.870227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.870262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.870438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.870455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.870547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.870572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-10 05:04:18.870750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.870783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.871058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.871092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.871223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.871259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.871389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.871424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.871684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.871718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-10 05:04:18.871893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.871925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.872106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.872141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.872258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.872293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.872473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.872507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.872634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.872670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-10 05:04:18.872789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.872808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.872989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.873007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.873171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.873190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.873356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.873390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.873582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.873617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-10 05:04:18.873722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.873756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.873935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.873970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.874176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.874211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.874458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.874492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.874666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.874700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-10 05:04:18.874878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.874912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.875118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.875153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.875307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.875341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.875614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.875647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.875841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.875860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-10 05:04:18.876033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.876068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.876312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.876348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.876527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.876560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.876684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.876703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.876784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.876800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-10 05:04:18.877123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.877216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.877367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.877404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.877589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.877609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.877798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.877831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.877950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.877984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-10 05:04:18.878161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.878208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.878339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.878372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.878709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.878787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.879025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.879063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.879271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.879310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-10 05:04:18.879486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.879507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.879688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.879707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.879872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.879915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.880128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.880161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.880376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.880411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-10 05:04:18.880657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.880691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.880858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.880876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.881043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.881076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.881264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.881301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.881548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.881581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-10 05:04:18.881705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.881722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-10 05:04:18.881831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-10 05:04:18.881849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.881928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.881947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.882137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.882180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.882426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.882460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.882660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.882695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.882792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.882809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.882890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.882907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.882992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.883010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.883205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.883224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.883370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.883387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.883536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.883555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.883706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.883725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.883823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.883840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.883920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.883937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.884045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.884062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.884158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.884184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.884309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.884326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.884499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.884532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.884743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.884774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.884951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.884984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.885107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.885139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.885359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.885393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.885568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.885602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.885792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.885827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.886093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.886127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.886257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.886302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.886445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.886479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.886673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.886708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.886831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.886848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.886943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.886961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.887124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.887157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.887381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.887415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.887534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.887567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.887763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.887781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.887953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.887972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.888197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.888215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.888291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.888308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.888479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.888496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.888648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.888667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.888776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.888794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.889023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.889062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.889312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.889347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.889603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.889620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.889776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.889797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.889887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.889904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.890054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.890072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.890162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.890222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.890358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.890393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.890522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.890555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.890678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.890716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.890836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.890859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.891015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.891054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.891300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.891335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.891524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.891559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.891730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.891749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.891841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.891859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.892014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.892032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.892198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.892232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.892498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.892532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.892658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.892703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.892922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.892940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.893032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.893048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.893142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.893160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.893336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.893353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.893508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.893526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.893612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.893629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.893773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.893790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.894023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.894045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.894139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.894157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.894359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.894378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.894453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.894470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.894540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.894558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.894647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.894663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.894807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.894846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.895029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.895061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.895248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.895285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.895470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.895503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.895671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.895689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.895762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.895780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.895926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.895944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.896016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.896031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.896190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.896208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.896366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.896384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.896544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.896582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.896690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.896724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.896896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.896934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.897049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.897082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.897282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.897317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.897425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.897458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-10 05:04:18.897712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.897745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.897859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.897891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.898065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.898097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.898341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.898376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-10 05:04:18.898643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-10 05:04:18.898675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-10 05:04:18.919688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.919706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.919844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.919887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.920009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.920042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.920282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.920317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.920577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.920652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-10 05:04:18.920812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.920849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.920970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.921005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.921159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.921186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.921282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.921299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.921538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.921568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-10 05:04:18.921756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.921788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.921998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.922031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.922220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.922254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.922470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.922505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.922678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.922711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-10 05:04:18.922935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.922970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.923095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.923128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.923317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.923352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.923467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.923500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.923632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.923650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-10 05:04:18.923854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.923871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.924084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.924119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.924323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.924357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.924473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.924508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.924701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.924718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-10 05:04:18.924964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.924998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.925281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.925316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.925510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.925543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.925713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.925746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.925987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.926019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-10 05:04:18.926223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.926259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.926397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.926437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.926582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.926615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.926862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.926896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.927070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.927109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-10 05:04:18.927231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.927265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.927377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.927410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.927538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.927573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.927849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.927885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.928073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.928090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-10 05:04:18.928188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.928206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.928396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.928415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.928568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.928602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.928830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.928864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.929151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.929200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-10 05:04:18.929454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.929487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.929595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.929628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.929893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.929929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.930052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.930085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.930325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.930360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-10 05:04:18.930552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.930586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.930825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.930857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.930978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.931012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.931215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.931250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.931432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.931473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-10 05:04:18.931568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.931584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.931664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.931680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.931861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.931895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.932021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.932062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.932280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.932314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-10 05:04:18.932591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.932625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.932823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.932840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.933093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.933110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.933268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.933286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.933431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.933449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-10 05:04:18.933652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.933670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.933755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.933801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.933974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.934008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.934189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.934224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.934412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.934445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-10 05:04:18.934673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.934708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.934942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.934958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.935150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.935174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.935404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.935439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.935642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.935676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-10 05:04:18.935824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.935858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.935970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-10 05:04:18.936003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-10 05:04:18.936200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-10 05:04:18.936235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-10 05:04:18.936348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-10 05:04:18.936380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-10 05:04:18.936642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-10 05:04:18.936676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 
00:27:27.964 [2024-12-10 05:04:18.936948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-10 05:04:18.936980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-10 05:04:18.937238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-10 05:04:18.937273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-10 05:04:18.937485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-10 05:04:18.937519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-10 05:04:18.937772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-10 05:04:18.937790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-10 05:04:18.937930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-10 05:04:18.937949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-12-10 05:04:18.959446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.959479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.959666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.959699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.959882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.959915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.960095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.960113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.960288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.960334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-12-10 05:04:18.960512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.960545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.960681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.960714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.960832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.960867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.961039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.961058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.961236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.961254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-12-10 05:04:18.961417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.961433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.961516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.961532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.961644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.961683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.961811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.961845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.961973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.962007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-12-10 05:04:18.962118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.962150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.962353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.962387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.962573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.962607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.962779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.962814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.962986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.963020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-12-10 05:04:18.963137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.963193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.963371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.963405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.963646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.963679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.963858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.963891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.964003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.964022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-12-10 05:04:18.964205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.964224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.964375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.964393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.964484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.964500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.964600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.964633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.964742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.964775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-12-10 05:04:18.964886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.964919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.965091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.965123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.965251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.965287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.965420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.965453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.965580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.965614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-12-10 05:04:18.965797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.965814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.965918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.965935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.966006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.966022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.966175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.966194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.966298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.966315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-12-10 05:04:18.966412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.966429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.966609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.966627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.966761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.966778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.966852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.966869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.966952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.966969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-12-10 05:04:18.967055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.967073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.967157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.967181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.967259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.967275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.967530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.967564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.967738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.967757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-12-10 05:04:18.967915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.967948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.968063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.968096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.968315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.968350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.968530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.968562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.968745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.968779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-12-10 05:04:18.968885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.968919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.969040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.969056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.969148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.969164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.969237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.969254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.969334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.969351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-12-10 05:04:18.969427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.969471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.969660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-10 05:04:18.969694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-10 05:04:18.969805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-10 05:04:18.969839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-10 05:04:18.970017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-10 05:04:18.970052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-10 05:04:18.970177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-10 05:04:18.970211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 
00:27:27.966 [2024-12-10 05:04:18.970342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-10 05:04:18.970376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-10 05:04:18.970503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-10 05:04:18.970542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-10 05:04:18.970717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-10 05:04:18.970735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-10 05:04:18.970811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-10 05:04:18.970827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-10 05:04:18.970978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-10 05:04:18.970994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 
00:27:27.966 [2024-12-10 05:04:18.971062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-10 05:04:18.971079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-10 05:04:18.971283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-10 05:04:18.971301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-10 05:04:18.971404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-10 05:04:18.971422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-10 05:04:18.971505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-10 05:04:18.971523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-10 05:04:18.971686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-10 05:04:18.971703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 
00:27:27.966 [2024-12-10 05:04:18.971853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-10 05:04:18.971870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it.
[... identical error triplet repeated from 05:04:18.971938 through 05:04:18.991551: posix.c:1054:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 (two entries at 05:04:18.981285/981491 report tqpair=0x7f58dc000b90 instead), each ending "qpair failed and we were unable to recover it." ...]
00:27:27.968 [2024-12-10 05:04:18.991629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-10 05:04:18.991645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-10 05:04:18.991791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-10 05:04:18.991824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-10 05:04:18.991936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-10 05:04:18.991968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-10 05:04:18.992222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.992256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.992384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.992416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 
00:27:27.969 [2024-12-10 05:04:18.992519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.992551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.992727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.992761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.992872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.992889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.992958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.992973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.993124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.993156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 
00:27:27.969 [2024-12-10 05:04:18.993290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.993331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.993460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.993492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.993613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.993652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.993769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.993802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.993908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.993941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 
00:27:27.969 [2024-12-10 05:04:18.994112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.994154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.994248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.994264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.994331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.994347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.994492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.994524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.994702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.994735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 
00:27:27.969 [2024-12-10 05:04:18.994860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.994893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.995020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.995037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.995189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.995208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.995290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.995306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.995511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.995528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 
00:27:27.969 [2024-12-10 05:04:18.995669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.995687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.995831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.995851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.996002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.996017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.996157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.996180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.996274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.996290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 
00:27:27.969 [2024-12-10 05:04:18.996514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.996532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.996599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.996641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.996824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.996858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.996965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.996998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.997125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.997158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 
00:27:27.969 [2024-12-10 05:04:18.997342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.997376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.997545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.997578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.997698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.997715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.997943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.997976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.998082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.998121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 
00:27:27.969 [2024-12-10 05:04:18.998242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.998277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.998390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.998424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-10 05:04:18.998553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-10 05:04:18.998586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:18.998824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:18.998857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:18.999027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:18.999045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 
00:27:27.970 [2024-12-10 05:04:18.999135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:18.999151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:18.999310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:18.999328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:18.999401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:18.999417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:18.999501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:18.999536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:18.999727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:18.999760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 
00:27:27.970 [2024-12-10 05:04:18.999947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:18.999980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.000102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.000119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.000370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.000389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.000538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.000556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.000700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.000718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 
00:27:27.970 [2024-12-10 05:04:19.000814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.000830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.000968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.000985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.001069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.001085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.001244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.001263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.001343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.001358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 
00:27:27.970 [2024-12-10 05:04:19.001505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.001547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.001733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.001766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.001960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.001992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.002102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.002120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.002201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.002218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 
00:27:27.970 [2024-12-10 05:04:19.002287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.002303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.002454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.002475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.002564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.002580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.002729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.002747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.002904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.002921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 
00:27:27.970 [2024-12-10 05:04:19.003073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.003090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.003243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.003260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.003335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.003355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.003505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.003523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.003611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.003627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 
00:27:27.970 [2024-12-10 05:04:19.003694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.003710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.003796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.003812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.003912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.003928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.004011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.004027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-10 05:04:19.004164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-10 05:04:19.004199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 
00:27:27.973 [2024-12-10 05:04:19.024532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.024549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.024629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.024645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.024796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.024814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.024884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.024900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.024988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.025003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 
00:27:27.973 [2024-12-10 05:04:19.025074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.025089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.025235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.025253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.025344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.025362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.025433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.025449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.025684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.025717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 
00:27:27.973 [2024-12-10 05:04:19.025891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.025925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.026108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.026140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.026337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.026356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.026461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.026494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.026615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.026647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 
00:27:27.973 [2024-12-10 05:04:19.026830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.026863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.027138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.027186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.027358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.027392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.027523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.027557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.027689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.027734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 
00:27:27.973 [2024-12-10 05:04:19.027813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.027829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.027986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.028024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.028216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.028252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.028431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.028465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.028599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.028632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 
00:27:27.973 [2024-12-10 05:04:19.028757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.028790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.028973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.028991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-10 05:04:19.029102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-10 05:04:19.029135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:28.258 [2024-12-10 05:04:19.029389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.258 [2024-12-10 05:04:19.029424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.258 qpair failed and we were unable to recover it. 00:27:28.258 [2024-12-10 05:04:19.029558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.258 [2024-12-10 05:04:19.029593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.258 qpair failed and we were unable to recover it. 
00:27:28.258 [2024-12-10 05:04:19.029744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.258 [2024-12-10 05:04:19.029777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.258 qpair failed and we were unable to recover it. 00:27:28.258 [2024-12-10 05:04:19.029886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.258 [2024-12-10 05:04:19.029920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.258 qpair failed and we were unable to recover it. 00:27:28.258 [2024-12-10 05:04:19.030103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.258 [2024-12-10 05:04:19.030121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.258 qpair failed and we were unable to recover it. 00:27:28.258 [2024-12-10 05:04:19.030204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.258 [2024-12-10 05:04:19.030220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.258 qpair failed and we were unable to recover it. 00:27:28.258 [2024-12-10 05:04:19.030373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.258 [2024-12-10 05:04:19.030390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.258 qpair failed and we were unable to recover it. 
00:27:28.258 [2024-12-10 05:04:19.030486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.258 [2024-12-10 05:04:19.030504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.258 qpair failed and we were unable to recover it. 00:27:28.258 [2024-12-10 05:04:19.030581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.258 [2024-12-10 05:04:19.030597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.258 qpair failed and we were unable to recover it. 00:27:28.258 [2024-12-10 05:04:19.030753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.258 [2024-12-10 05:04:19.030786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.030914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.030946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.031052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.031091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 
00:27:28.259 [2024-12-10 05:04:19.031273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.031307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.031415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.031449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.031635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.031667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.031784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.031817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.031951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.031985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 
00:27:28.259 [2024-12-10 05:04:19.032160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.032186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.032261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.032277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.032368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.032384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.032459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.032475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.032648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.032681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 
00:27:28.259 [2024-12-10 05:04:19.032888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.032921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.033055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.033088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.033204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.033222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.033377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.033395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.033603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.033621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 
00:27:28.259 [2024-12-10 05:04:19.033706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.033723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.033863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.033880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.033970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.034001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.034181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.034216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.034333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.034366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 
00:27:28.259 [2024-12-10 05:04:19.034623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.034656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.034781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.034815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.034987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.035021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.035139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.035156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.035259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.035277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 
00:27:28.259 [2024-12-10 05:04:19.035415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.035432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.035522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.035562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.035804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.035837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.035958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.035990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.036111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.036128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 
00:27:28.259 [2024-12-10 05:04:19.036336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.036355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.036442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.036459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.036548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.036565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.036769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.036787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.036888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.036905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 
00:27:28.259 [2024-12-10 05:04:19.037050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.037067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.259 qpair failed and we were unable to recover it. 00:27:28.259 [2024-12-10 05:04:19.037216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.259 [2024-12-10 05:04:19.037252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.260 qpair failed and we were unable to recover it. 00:27:28.260 [2024-12-10 05:04:19.037423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.260 [2024-12-10 05:04:19.037456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.260 qpair failed and we were unable to recover it. 00:27:28.260 [2024-12-10 05:04:19.037633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.260 [2024-12-10 05:04:19.037665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.260 qpair failed and we were unable to recover it. 00:27:28.260 [2024-12-10 05:04:19.037860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.260 [2024-12-10 05:04:19.037892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.260 qpair failed and we were unable to recover it. 
00:27:28.260 [... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error" / "qpair failed and we were unable to recover it" triplet repeats through 05:04:19.057441 for tqpair=0x12521a0 and tqpair=0x7f58e0000b90, all with addr=10.0.0.2, port=4420 ...]
00:27:28.262 [2024-12-10 05:04:19.057542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-12-10 05:04:19.057575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-12-10 05:04:19.057833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-12-10 05:04:19.057865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-12-10 05:04:19.058041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-12-10 05:04:19.058075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-12-10 05:04:19.058192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-12-10 05:04:19.058226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-12-10 05:04:19.058342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-12-10 05:04:19.058375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 
00:27:28.263 [2024-12-10 05:04:19.058545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.058579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.058749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.058781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.058908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.058941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.059064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.059098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.059351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.059386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 
00:27:28.263 [2024-12-10 05:04:19.059520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.059553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.059688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.059721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.059838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.059871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.059996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.060030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.060261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.060295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 
00:27:28.263 [2024-12-10 05:04:19.060494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.060511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.060666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.060682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.060820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.060838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.060975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.060995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.061155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.061200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 
00:27:28.263 [2024-12-10 05:04:19.061375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.061407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.061533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.061567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.061760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.061794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.062035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.062069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.062306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.062341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 
00:27:28.263 [2024-12-10 05:04:19.062467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.062507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.062631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.062664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.062799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.062832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.063011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.063029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.063182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.063217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 
00:27:28.263 [2024-12-10 05:04:19.063333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.063367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.063498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.063530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.063740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.063773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.064008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.064040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.064210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.064230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 
00:27:28.263 [2024-12-10 05:04:19.064379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.064412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.064545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.064577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.064766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-12-10 05:04:19.064799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-12-10 05:04:19.064989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.065006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.065109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.065126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 
00:27:28.264 [2024-12-10 05:04:19.065332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.065351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.065529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.065546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.065627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.065670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.065791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.065824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.066035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.066068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 
00:27:28.264 [2024-12-10 05:04:19.066257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.066292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.066407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.066440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.066619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.066652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.066912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.066946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.067062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.067095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 
00:27:28.264 [2024-12-10 05:04:19.067275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.067293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.067440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.067458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.067624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.067643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.067907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.067940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.068115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.068148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 
00:27:28.264 [2024-12-10 05:04:19.068356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.068390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.068579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.068611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.068794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.068827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.068963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.068997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.069129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.069162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 
00:27:28.264 [2024-12-10 05:04:19.069338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.069356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.069520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.069538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.069700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.069717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.069810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.069827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.069908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.069924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 
00:27:28.264 [2024-12-10 05:04:19.070065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.070082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.070188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.070205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.070384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.070401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.070639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.070673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.070859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.070891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 
00:27:28.264 [2024-12-10 05:04:19.071013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.071047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.071291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.071310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.071400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.071418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.071602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.071635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 00:27:28.264 [2024-12-10 05:04:19.071820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.264 [2024-12-10 05:04:19.071853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.264 qpair failed and we were unable to recover it. 
00:27:28.264 [2024-12-10 05:04:19.071956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-12-10 05:04:19.071989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
[the connect() failed / sock connection error / qpair failed sequence above repeats unchanged for tqpair=0x12521a0 from 05:04:19.072096 through 05:04:19.092304]
00:27:28.267 [2024-12-10 05:04:19.092545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-12-10 05:04:19.092619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-12-10 05:04:19.092768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-12-10 05:04:19.092804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-12-10 05:04:19.092989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.267 [2024-12-10 05:04:19.093023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.267 qpair failed and we were unable to recover it. 00:27:28.267 [2024-12-10 05:04:19.093140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.093193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.093364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.093396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.093608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.093643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.093821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.093859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 
00:27:28.268 [2024-12-10 05:04:19.093980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.094013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.094206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.094240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.094447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.094479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.094667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.094699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.094872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.094904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 
00:27:28.268 [2024-12-10 05:04:19.095196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.095215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.095370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.095388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.095537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.095554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.095709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.095726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.095864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.095881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 
00:27:28.268 [2024-12-10 05:04:19.095975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.095991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.096079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.096094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.096252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.096270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.096475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.096491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.096636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.096654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 
00:27:28.268 [2024-12-10 05:04:19.096807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.096849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.097039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.097072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.097202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.097237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.097413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.097452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.097620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.097637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 
00:27:28.268 [2024-12-10 05:04:19.097744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.097762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.097834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.097850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.098057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.098089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.098261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.098295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.098531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.098564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 
00:27:28.268 [2024-12-10 05:04:19.098745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.098779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.098894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.098928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.099121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.099153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.099403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.099436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.099551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.099584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 
00:27:28.268 [2024-12-10 05:04:19.099820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.099853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.100100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.100133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.100311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.100385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.100602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.100640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-12-10 05:04:19.100895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-12-10 05:04:19.100929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 
00:27:28.268 [2024-12-10 05:04:19.101149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.101193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.101434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.101467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.101659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.101691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.101894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.101927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.102136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.102178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 
00:27:28.269 [2024-12-10 05:04:19.102390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.102424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.102553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.102589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.102773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.102805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.103046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.103079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.103212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.103248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 
00:27:28.269 [2024-12-10 05:04:19.103424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.103441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.103584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.103625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.103752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.103787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.103920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.103952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.104124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.104156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 
00:27:28.269 [2024-12-10 05:04:19.104343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.104364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.104463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.104480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.104653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.104670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.104878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.104896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.105146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.105164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 
00:27:28.269 [2024-12-10 05:04:19.105259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.105275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.105411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.105428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.105579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.105596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.105711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.105743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.105868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.105901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 
00:27:28.269 [2024-12-10 05:04:19.106074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.106107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.106379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.106398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.106499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.106545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.106745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.106778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.106978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.107011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 
00:27:28.269 [2024-12-10 05:04:19.107197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.107234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.107412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.107443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.107568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.107602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.107724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.107757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-12-10 05:04:19.107888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-12-10 05:04:19.107920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 
00:27:28.269 [2024-12-10 05:04:19.108049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.269 [2024-12-10 05:04:19.108082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.269 qpair failed and we were unable to recover it.
00:27:28.272 last error sequence repeated 114 more times between 05:04:19.108 and 05:04:19.130 (tqpair=0x12521a0 in all repeats except 3 on tqpair=0x7f58e8000b90 and 2 on tqpair=0x7f58e0000b90; addr=10.0.0.2, port=4420 throughout)
00:27:28.272 [2024-12-10 05:04:19.130231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-12-10 05:04:19.130265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-12-10 05:04:19.130479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-12-10 05:04:19.130513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-12-10 05:04:19.130743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-12-10 05:04:19.130776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-12-10 05:04:19.130975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.131008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.131197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.131231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 
00:27:28.273 [2024-12-10 05:04:19.131485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.131518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.131722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.131755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.131925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.131957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.132196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.132231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.132471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.132504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 
00:27:28.273 [2024-12-10 05:04:19.132693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.132725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.132838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.132873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.132977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.133010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.133247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.133281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.133515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.133532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 
00:27:28.273 [2024-12-10 05:04:19.133739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.133756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.133841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.133856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.134072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.134105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.134296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.134329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.134458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.134490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 
00:27:28.273 [2024-12-10 05:04:19.134723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.134756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.134872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.134905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.135035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.135068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.135273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.135309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.135496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.135514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 
00:27:28.273 [2024-12-10 05:04:19.135746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.135763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.135850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.135865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.135962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.135979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.136118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.136135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.136225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.136241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 
00:27:28.273 [2024-12-10 05:04:19.136418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.136451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.136634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.136666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.136782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.136814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.136991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.137023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.137218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.137250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 
00:27:28.273 [2024-12-10 05:04:19.137423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.137468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.137632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.137650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.137800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.137817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.138001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.138040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.138224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.138258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 
00:27:28.273 [2024-12-10 05:04:19.138435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.138469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.138647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-12-10 05:04:19.138680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-12-10 05:04:19.138949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.138981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.139115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.139160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.139371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.139388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 
00:27:28.274 [2024-12-10 05:04:19.139545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.139578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.139703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.139736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.139855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.139887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.140063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.140096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.140269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.140305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 
00:27:28.274 [2024-12-10 05:04:19.140538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.140555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.140642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.140658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.140934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.140967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.141141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.141159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.141284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.141301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 
00:27:28.274 [2024-12-10 05:04:19.141450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.141467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.141542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.141558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.141691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.141708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.141782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.141798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.141864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.141879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 
00:27:28.274 [2024-12-10 05:04:19.141990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.142021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.142139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.142181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.142373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.142406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.142580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.142612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.142852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.142884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 
00:27:28.274 [2024-12-10 05:04:19.142998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.143036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.143240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.143274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.143393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.143426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.143664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.143680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.143762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.143778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 
00:27:28.274 [2024-12-10 05:04:19.143913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.143930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.143999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.144015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.144092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.144108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.144327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.144346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 00:27:28.274 [2024-12-10 05:04:19.144507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.144539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 
00:27:28.274 [2024-12-10 05:04:19.144800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.274 [2024-12-10 05:04:19.144832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.274 qpair failed and we were unable to recover it. 
[... identical connect()/qpair-failure error pair for tqpair=0x12521a0 (addr=10.0.0.2, port=4420) repeated through 2024-12-10 05:04:19.168483 ...]
00:27:28.277 [2024-12-10 05:04:19.168671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-12-10 05:04:19.168704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-12-10 05:04:19.168887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-12-10 05:04:19.168921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-12-10 05:04:19.169043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-12-10 05:04:19.169076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-12-10 05:04:19.169248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-12-10 05:04:19.169283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-12-10 05:04:19.169522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-12-10 05:04:19.169554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 
00:27:28.277 [2024-12-10 05:04:19.169727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.169760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.169954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.169987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.170126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.170144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.170407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.170442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.170705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.170737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 
00:27:28.278 [2024-12-10 05:04:19.170843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.170876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.171054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.171087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.171273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.171308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.171543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.171576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.171819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.171853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 
00:27:28.278 [2024-12-10 05:04:19.172059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.172092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.172348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.172366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.172452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.172467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.172606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.172624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.172775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.172807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 
00:27:28.278 [2024-12-10 05:04:19.172927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.172962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.173084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.173117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.173324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.173358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.173474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.173506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.173635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.173667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 
00:27:28.278 [2024-12-10 05:04:19.173869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.173903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.174148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.174191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.174369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.174401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.174572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.174590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.174686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.174704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 
00:27:28.278 [2024-12-10 05:04:19.174794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.174810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.174981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.174999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.175140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.175158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.175238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.175255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.175334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.175350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 
00:27:28.278 [2024-12-10 05:04:19.175426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.175442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.175583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.175615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.175834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.175867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.176005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.176040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.176154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.176188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 
00:27:28.278 [2024-12-10 05:04:19.176392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.176411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.176559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.176577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.176802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.176820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.176899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.176915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-12-10 05:04:19.177052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-12-10 05:04:19.177069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 
00:27:28.279 [2024-12-10 05:04:19.177222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.177257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.177499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.177533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.177807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.177841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.177954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.177989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.178114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.178157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 
00:27:28.279 [2024-12-10 05:04:19.178264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.178281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.178361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.178378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.178602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.178619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.178687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.178703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.178839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.178858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 
00:27:28.279 [2024-12-10 05:04:19.179005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.179022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.179179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.179197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.179288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.179305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.179446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.179480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.179681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.179714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 
00:27:28.279 [2024-12-10 05:04:19.179887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.179920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.180038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.180072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.180268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.180303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.180478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.180511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.180680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.180697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 
00:27:28.279 [2024-12-10 05:04:19.180845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.180886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.181059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.181091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.181282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.181317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.181584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.181617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.181740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.181773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 
00:27:28.279 [2024-12-10 05:04:19.182024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.182058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.182305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.182324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.182409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.182424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.182506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.182523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 00:27:28.279 [2024-12-10 05:04:19.182603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.279 [2024-12-10 05:04:19.182619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.279 qpair failed and we were unable to recover it. 
00:27:28.279 [2024-12-10 05:04:19.182786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.279 [2024-12-10 05:04:19.182819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.279 qpair failed and we were unable to recover it.
00:27:28.282 [... the same three-line error sequence (posix_sock_create connect() failed errno = 111 / nvme_tcp_qpair_connect_sock sock connection error / qpair failed and we were unable to recover it) repeats 114 more times between 05:04:19.182922 and 05:04:19.202888, always for tqpair=0x12521a0 against addr=10.0.0.2, port=4420 ...]
00:27:28.282 [2024-12-10 05:04:19.203043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-12-10 05:04:19.203077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-12-10 05:04:19.203199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-12-10 05:04:19.203234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-12-10 05:04:19.203484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-12-10 05:04:19.203517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-12-10 05:04:19.203702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-12-10 05:04:19.203721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-12-10 05:04:19.203880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-12-10 05:04:19.203914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 
00:27:28.283 [2024-12-10 05:04:19.204092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.204125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.204415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.204454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.204549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.204566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.204824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.204857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.204980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.205014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 
00:27:28.283 [2024-12-10 05:04:19.205279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.205315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.205434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.205453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.205604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.205621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.205707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.205724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.205817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.205834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 
00:27:28.283 [2024-12-10 05:04:19.206048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.206081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.206212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.206248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.206373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.206405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.206596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.206629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.206758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.206791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 
00:27:28.283 [2024-12-10 05:04:19.206912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.206946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.207057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.207091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.207327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.207346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.207484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.207502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.207578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.207597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 
00:27:28.283 [2024-12-10 05:04:19.207670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.207686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.207791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.207808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.207876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.207893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.208042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.208060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.208197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.208216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 
00:27:28.283 [2024-12-10 05:04:19.208359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.208376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.208510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.208527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.208683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.208715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.208955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.208989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.209099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.209141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 
00:27:28.283 [2024-12-10 05:04:19.209240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.209256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.209341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.209358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.209517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.209534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.209682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.209699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.209845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.209862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 
00:27:28.283 [2024-12-10 05:04:19.210009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.210027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.210272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.210307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.210422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.210457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.210638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-12-10 05:04:19.210670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-12-10 05:04:19.210850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.210884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 
00:27:28.284 [2024-12-10 05:04:19.211068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.211102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.211286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.211305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.211395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.211411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.211496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.211515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.211598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.211614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 
00:27:28.284 [2024-12-10 05:04:19.211689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.211705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.211859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.211892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.212020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.212053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.212159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.212203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.212377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.212411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 
00:27:28.284 [2024-12-10 05:04:19.212513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.212530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.212672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.212690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.212827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.212844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.212920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.212936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.213175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.213194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 
00:27:28.284 [2024-12-10 05:04:19.213290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.213326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.213500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.213534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.213654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.213687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.213951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.213985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.214120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.214152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 
00:27:28.284 [2024-12-10 05:04:19.214360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.214395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.214532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.214565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.214724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.214742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.214906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.214941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.215131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.215177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 
00:27:28.284 [2024-12-10 05:04:19.215364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.215398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.215573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.215590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.215740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.215775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.215978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.216011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.216205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.216241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 
00:27:28.284 [2024-12-10 05:04:19.216425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.216443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.216605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.216624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.216710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.216726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.216821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.216838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 00:27:28.284 [2024-12-10 05:04:19.216991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.284 [2024-12-10 05:04:19.217010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.284 qpair failed and we were unable to recover it. 
00:27:28.287 [2024-12-10 05:04:19.237661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-12-10 05:04:19.237678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-12-10 05:04:19.237776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-12-10 05:04:19.237816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-12-10 05:04:19.238010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-12-10 05:04:19.238043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-12-10 05:04:19.238228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-12-10 05:04:19.238262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-12-10 05:04:19.238384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-12-10 05:04:19.238419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 
00:27:28.287 [2024-12-10 05:04:19.238524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-12-10 05:04:19.238557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-12-10 05:04:19.238741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-12-10 05:04:19.238759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.238852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.238868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.238947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.238963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.239102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.239119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 
00:27:28.288 [2024-12-10 05:04:19.239202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.239220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.239370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.239388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.239619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.239637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.239845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.239861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.240000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.240017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 
00:27:28.288 [2024-12-10 05:04:19.240111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.240127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.240219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.240237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.240313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.240330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.240406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.240422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.240508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.240524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 
00:27:28.288 [2024-12-10 05:04:19.240675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.240705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.240901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.240935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.241044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.241077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.241198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.241234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.241425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.241465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 
00:27:28.288 [2024-12-10 05:04:19.241582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.241615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.241808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.241825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.241906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.241921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.242094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.242111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.242192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.242209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 
00:27:28.288 [2024-12-10 05:04:19.242305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.242321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.242533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.242566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.242804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.242838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.243018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.243051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.243158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.243201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 
00:27:28.288 [2024-12-10 05:04:19.243408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.243441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.243634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.243667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.243772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.243790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.243873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.243889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.243992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.244025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 
00:27:28.288 [2024-12-10 05:04:19.244137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.244202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.244384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.244418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.244603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.244637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.244761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.244794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.244920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.244955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 
00:27:28.288 [2024-12-10 05:04:19.245142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-12-10 05:04:19.245188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-12-10 05:04:19.245310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.245343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.245470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.245503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.245741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.245773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.245959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.245992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 
00:27:28.289 [2024-12-10 05:04:19.246094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.246129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.246257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.246302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.246564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.246596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.246778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.246810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.246929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.246962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 
00:27:28.289 [2024-12-10 05:04:19.247140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.247183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.247303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.247336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.247509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.247525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.247756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.247789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.248027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.248060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 
00:27:28.289 [2024-12-10 05:04:19.248232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.248266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.248380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.248415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.248594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.248628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.248798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.248816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.248895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.248911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 
00:27:28.289 [2024-12-10 05:04:19.249004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.249022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.249098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.249114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.249201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.249218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.249307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.249326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.249489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.249522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 
00:27:28.289 [2024-12-10 05:04:19.249657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.249690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.249928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.249963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.250072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.250105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.250225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.250259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.250377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.250411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 
00:27:28.289 [2024-12-10 05:04:19.250652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.250669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.250806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.250823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.250973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.251005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.251263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.251298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 00:27:28.289 [2024-12-10 05:04:19.251401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.289 [2024-12-10 05:04:19.251419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.289 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats ~110 more times with advancing timestamps, 05:04:19.251573 through 05:04:19.272763; tqpair=0x12521a0 throughout, except two attempts at 05:04:19.264901 and 05:04:19.265315 that failed on tqpair=0x7f58e0000b90 and tqpair=0x7f58dc000b90 respectively ...]
00:27:28.292 [2024-12-10 05:04:19.272866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-12-10 05:04:19.272899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-12-10 05:04:19.273032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-12-10 05:04:19.273065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-12-10 05:04:19.273187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-12-10 05:04:19.273222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-12-10 05:04:19.273416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-12-10 05:04:19.273450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-12-10 05:04:19.273628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-12-10 05:04:19.273646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 
00:27:28.292 [2024-12-10 05:04:19.273879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-12-10 05:04:19.273913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-12-10 05:04:19.274176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-12-10 05:04:19.274211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-12-10 05:04:19.274386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-12-10 05:04:19.274421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-12-10 05:04:19.274599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-12-10 05:04:19.274616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-12-10 05:04:19.274853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.274885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 
00:27:28.293 [2024-12-10 05:04:19.275000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.275034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.275146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.275191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.275368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.275402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.275547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.275580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.275709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.275744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 
00:27:28.293 [2024-12-10 05:04:19.275933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.275966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.276086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.276121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.276271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.276307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.276483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.276518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.276647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.276680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 
00:27:28.293 [2024-12-10 05:04:19.276851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.276884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.277062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.277098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.277269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.277307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.277544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.277576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.277766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.277802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 
00:27:28.293 [2024-12-10 05:04:19.277971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.278005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.278197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.278231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.278360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.278377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.278604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.278621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.278763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.278781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 
00:27:28.293 [2024-12-10 05:04:19.278863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.278879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.279054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.279071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.279192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.279225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.279448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.279483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.279588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.279624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 
00:27:28.293 [2024-12-10 05:04:19.279800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.279833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.280011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.280044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.280244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.280280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.280469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.280503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.280680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.280713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 
00:27:28.293 [2024-12-10 05:04:19.280819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.280853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.281017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.281035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.281189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.281224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.281442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.281475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.281645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.281678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 
00:27:28.293 [2024-12-10 05:04:19.281915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.281932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.282008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.282023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.282115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.282132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-12-10 05:04:19.282272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-12-10 05:04:19.282291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.282502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.282519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 
00:27:28.294 [2024-12-10 05:04:19.282612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.282628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.282730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.282750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.282897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.282914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.282985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.283002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.283230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.283248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 
00:27:28.294 [2024-12-10 05:04:19.283336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.283352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.283429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.283445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.283601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.283620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.283698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.283714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.283818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.283852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 
00:27:28.294 [2024-12-10 05:04:19.284023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.284056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.284350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.284384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.284500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.284517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.284656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.284674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.284813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.284831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 
00:27:28.294 [2024-12-10 05:04:19.284918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.284934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.285069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.285088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.285179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.285196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.285282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.285299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.285394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.285410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 
00:27:28.294 [2024-12-10 05:04:19.285494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.285510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.285658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.285676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.285822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.285855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.285964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.285996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 00:27:28.294 [2024-12-10 05:04:19.286226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.286261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 
00:27:28.294 [2024-12-10 05:04:19.286436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.294 [2024-12-10 05:04:19.286471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.294 qpair failed and we were unable to recover it. 
00:27:28.294 [... the same error pair (posix.c:1054 connect() failed, errno = 111 / nvme_tcp.c:2288 sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 05:04:19.286 through 05:04:19.307; repeated entries elided ...]
00:27:28.297 [2024-12-10 05:04:19.307234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.297 [2024-12-10 05:04:19.307254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.297 qpair failed and we were unable to recover it. 00:27:28.297 [2024-12-10 05:04:19.307330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.297 [2024-12-10 05:04:19.307346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.297 qpair failed and we were unable to recover it. 00:27:28.297 [2024-12-10 05:04:19.307435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.297 [2024-12-10 05:04:19.307466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.297 qpair failed and we were unable to recover it. 00:27:28.297 [2024-12-10 05:04:19.307605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.297 [2024-12-10 05:04:19.307640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.297 qpair failed and we were unable to recover it. 00:27:28.297 [2024-12-10 05:04:19.307744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.297 [2024-12-10 05:04:19.307777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.297 qpair failed and we were unable to recover it. 
00:27:28.297 [2024-12-10 05:04:19.307904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.297 [2024-12-10 05:04:19.307937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.297 qpair failed and we were unable to recover it. 00:27:28.297 [2024-12-10 05:04:19.308041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.297 [2024-12-10 05:04:19.308074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.297 qpair failed and we were unable to recover it. 00:27:28.297 [2024-12-10 05:04:19.308208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.297 [2024-12-10 05:04:19.308245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.297 qpair failed and we were unable to recover it. 00:27:28.297 [2024-12-10 05:04:19.308376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.297 [2024-12-10 05:04:19.308410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.297 qpair failed and we were unable to recover it. 00:27:28.297 [2024-12-10 05:04:19.308544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.297 [2024-12-10 05:04:19.308576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.297 qpair failed and we were unable to recover it. 
00:27:28.297 [2024-12-10 05:04:19.308841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.297 [2024-12-10 05:04:19.308881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.297 qpair failed and we were unable to recover it. 00:27:28.297 [2024-12-10 05:04:19.309088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.297 [2024-12-10 05:04:19.309121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.297 qpair failed and we were unable to recover it. 00:27:28.297 [2024-12-10 05:04:19.309398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.309433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.309691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.309709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.309851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.309885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 
00:27:28.298 [2024-12-10 05:04:19.310019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.310051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.310295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.310330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.310438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.310471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.310659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.310692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.310890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.310907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 
00:27:28.298 [2024-12-10 05:04:19.311109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.311126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.311329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.311348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.311484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.311500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.311635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.311653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.311822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.311864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 
00:27:28.298 [2024-12-10 05:04:19.311989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.312021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.312136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.312178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.312364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.312399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.312651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.312685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.312804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.312836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 
00:27:28.298 [2024-12-10 05:04:19.312953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.312969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.313050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.313066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.313138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.313191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.313370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.313404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.313606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.313637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 
00:27:28.298 [2024-12-10 05:04:19.313874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.313891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.314045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.314076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.314202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.314237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.314444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.314479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.314661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.314695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 
00:27:28.298 [2024-12-10 05:04:19.314872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.314889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.314965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.314980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.315067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.315083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.315221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.315241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.315393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.315409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 
00:27:28.298 [2024-12-10 05:04:19.315495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.315512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.315663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.298 [2024-12-10 05:04:19.315680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.298 qpair failed and we were unable to recover it. 00:27:28.298 [2024-12-10 05:04:19.315787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.315803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.315887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.315902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.315994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.316011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 
00:27:28.299 [2024-12-10 05:04:19.316159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.316185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.316254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.316270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.316470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.316488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.316631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.316662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.316905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.316937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 
00:27:28.299 [2024-12-10 05:04:19.317131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.317164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.317361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.317394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.317516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.317553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.317688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.317705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.317786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.317802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 
00:27:28.299 [2024-12-10 05:04:19.317958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.317975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.318058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.318103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.318393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.318428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.318532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.318566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.318772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.318789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 
00:27:28.299 [2024-12-10 05:04:19.318948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.318966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.319119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.319136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.319238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.319255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.319348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.319365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.319510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.319527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 
00:27:28.299 [2024-12-10 05:04:19.319685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.319723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.319905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.319938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.320127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.320160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.320340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.320376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 00:27:28.299 [2024-12-10 05:04:19.320480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.320513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 
00:27:28.299 [2024-12-10 05:04:19.320673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.299 [2024-12-10 05:04:19.320693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.299 qpair failed and we were unable to recover it. 
00:27:28.302 [2024-12-10 05:04:19.342133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.342176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.342371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.342406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.342528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.342569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.342639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.342654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.342734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.342750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 
00:27:28.302 [2024-12-10 05:04:19.342973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.343008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.343131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.343164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.343317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.343351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.343527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.343559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.343680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.343697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 
00:27:28.302 [2024-12-10 05:04:19.343787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.343806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.343962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.343979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.344200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.344218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.344301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.344335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.344520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.344553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 
00:27:28.302 [2024-12-10 05:04:19.344669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.344703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.344806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.344838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.345012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.345029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.345257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.345276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.345357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.345372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 
00:27:28.302 [2024-12-10 05:04:19.345453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.345468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.345548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.345564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.302 qpair failed and we were unable to recover it. 00:27:28.302 [2024-12-10 05:04:19.345632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.302 [2024-12-10 05:04:19.345649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.345868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.345885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.346030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.346047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 
00:27:28.303 [2024-12-10 05:04:19.346134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.346186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.346374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.346407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.346538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.346572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.346677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.346715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.346785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.346801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 
00:27:28.303 [2024-12-10 05:04:19.346894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.346910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.347159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.347184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.347340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.347356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.347438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.347454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.347545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.347561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 
00:27:28.303 [2024-12-10 05:04:19.347699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.347716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.347817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.347834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.347913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.347932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.348159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.348183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.348261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.348277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 
00:27:28.303 [2024-12-10 05:04:19.348371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.348401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.348524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.348558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.348679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.348711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.348881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.348913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.349088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.349121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 
00:27:28.303 [2024-12-10 05:04:19.349237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.349270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.349513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.349547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.349760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.349794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.349911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.349943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.350051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.350084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 
00:27:28.303 [2024-12-10 05:04:19.350295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.350330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.350587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.350670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.303 [2024-12-10 05:04:19.350889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.303 [2024-12-10 05:04:19.350929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.303 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.351159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.351213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.351373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.351394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 
00:27:28.304 [2024-12-10 05:04:19.351468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.351483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.351633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.351651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.351734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.351749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.351903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.351920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.352002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.352017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 
00:27:28.304 [2024-12-10 05:04:19.352100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.352118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.352372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.352408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.352609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.352643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.352763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.352796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.352980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.352996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 
00:27:28.304 [2024-12-10 05:04:19.353069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.353086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.353184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.353201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.353344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.353361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.353523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.353540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.353702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.353719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 
00:27:28.304 [2024-12-10 05:04:19.353803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.353819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.353977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.354051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.354195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.354237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.354419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.354453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.354562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.354596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 
00:27:28.304 [2024-12-10 05:04:19.354760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.354796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.355008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.355043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.355229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.355248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.355460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.355479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 00:27:28.304 [2024-12-10 05:04:19.355651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.304 [2024-12-10 05:04:19.355685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.304 qpair failed and we were unable to recover it. 
00:27:28.304 [... 2024-12-10 05:04:19.355869 through 05:04:19.374753: the same three-line record repeated (connect() failed, errno = 111; sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) ...]
00:27:28.593 [2024-12-10 05:04:19.374993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.375025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-12-10 05:04:19.375221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.375261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-12-10 05:04:19.375379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.375412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-12-10 05:04:19.375532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.375565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-12-10 05:04:19.375832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.375865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 
00:27:28.593 [2024-12-10 05:04:19.375968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.375985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-12-10 05:04:19.376076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.376092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-12-10 05:04:19.376163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.376188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-12-10 05:04:19.376269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.376284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-12-10 05:04:19.376420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.376438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 
00:27:28.593 [2024-12-10 05:04:19.376577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.376594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-12-10 05:04:19.376746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.376779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-12-10 05:04:19.376899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.376931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-12-10 05:04:19.377106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.377137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-12-10 05:04:19.377368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.377402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 
00:27:28.593 [2024-12-10 05:04:19.377611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.377686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-12-10 05:04:19.377827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.377864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-12-10 05:04:19.377975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.378008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-12-10 05:04:19.378259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.378296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-12-10 05:04:19.378475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.378508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 
00:27:28.593 [2024-12-10 05:04:19.378698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.378731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-12-10 05:04:19.378829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-12-10 05:04:19.378846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.378987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.379005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.379172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.379189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.379306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.379340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 
00:27:28.594 [2024-12-10 05:04:19.379517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.379549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.379728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.379760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.380034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.380052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.380221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.380238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.380340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.380375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 
00:27:28.594 [2024-12-10 05:04:19.380548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.380581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.380712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.380745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.380924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.380941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.381024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.381039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.381121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.381155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 
00:27:28.594 [2024-12-10 05:04:19.381352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.381387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.381557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.381591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.381705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.381722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.381860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.381879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.382028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.382060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 
00:27:28.594 [2024-12-10 05:04:19.382186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.382219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.382322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.382356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.382559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.382598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.382849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.382884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.383093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.383127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 
00:27:28.594 [2024-12-10 05:04:19.383376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.383411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.383533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.383567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.383746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.383779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.383890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.383924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.384114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.384147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 
00:27:28.594 [2024-12-10 05:04:19.384286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.384320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.384613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.384645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.384830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.384863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-12-10 05:04:19.385075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-12-10 05:04:19.385108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.595 [2024-12-10 05:04:19.385228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-12-10 05:04:19.385250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 
00:27:28.595 [2024-12-10 05:04:19.385338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-12-10 05:04:19.385354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-12-10 05:04:19.385524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-12-10 05:04:19.385543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-12-10 05:04:19.385767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-12-10 05:04:19.385799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-12-10 05:04:19.385925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-12-10 05:04:19.385959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-12-10 05:04:19.386081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-12-10 05:04:19.386112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 
00:27:28.595 [2024-12-10 05:04:19.386234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-12-10 05:04:19.386268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-12-10 05:04:19.386381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-12-10 05:04:19.386414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-12-10 05:04:19.386619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-12-10 05:04:19.386651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-12-10 05:04:19.386809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-12-10 05:04:19.386826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-12-10 05:04:19.386916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-12-10 05:04:19.386932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 
00:27:28.644 [2024-12-10 05:04:19.387015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-10 05:04:19.387031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-10 05:04:19.387189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-10 05:04:19.387209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-10 05:04:19.387430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-10 05:04:19.387462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-10 05:04:19.387653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-10 05:04:19.387684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-10 05:04:19.387830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-10 05:04:19.387866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 
00:27:28.644 [2024-12-10 05:04:19.387974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-10 05:04:19.388009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-10 05:04:19.388186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-10 05:04:19.388221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-10 05:04:19.388401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-10 05:04:19.388438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-10 05:04:19.388563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.388597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.388800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.388833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 
00:27:28.645 [2024-12-10 05:04:19.388938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.388955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.389129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.389146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.389297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.389315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.389475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.389506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.389625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.389658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 
00:27:28.645 [2024-12-10 05:04:19.389782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.389814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.389944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.389978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.390231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.390303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.390488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.390561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.390757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.390793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 
00:27:28.645 [2024-12-10 05:04:19.390903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.390923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.391163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.391189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.391395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.391413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.391516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.391532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.391764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.391798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 
00:27:28.645 [2024-12-10 05:04:19.391999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.392033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.392206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.392241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.392416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.392451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.392579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.392610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.392733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.392766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 
00:27:28.645 [2024-12-10 05:04:19.392944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.392977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.393251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.393325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.393519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.393555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.393737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.393772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.393948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.393968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 
00:27:28.645 [2024-12-10 05:04:19.394056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.394072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.394298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.394317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.394464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.394483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.394577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.394592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.394682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.394699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 
00:27:28.645 [2024-12-10 05:04:19.394932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.394966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.395145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.395190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.395455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.395489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.395602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.395634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.395829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.395861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 
00:27:28.645 [2024-12-10 05:04:19.396037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.396054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.396129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-10 05:04:19.396145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-10 05:04:19.396256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.396274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.396429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.396446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.396600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.396618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 
00:27:28.646 [2024-12-10 05:04:19.396774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.396813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.397081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.397114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.397260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.397295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.397420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.397454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.397580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.397612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 
00:27:28.646 [2024-12-10 05:04:19.397726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.397744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.397958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.397991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.398205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.398241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.398355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.398391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.398507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.398541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 
00:27:28.646 [2024-12-10 05:04:19.398661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.398695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.398828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.398862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.399034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.399067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.399196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.399231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.399331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.399350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 
00:27:28.646 [2024-12-10 05:04:19.399503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.399521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.399692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.399709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.399850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.399868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.400019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.400037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.400299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.400333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 
00:27:28.646 [2024-12-10 05:04:19.400441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.400474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.400716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.400749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.400971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.401003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.401124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.401157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.401344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.401378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 
00:27:28.646 [2024-12-10 05:04:19.401585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.401618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.401740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.401757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.401848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.401863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.401950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.401967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-10 05:04:19.402110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.402146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 
00:27:28.646 [2024-12-10 05:04:19.402371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-10 05:04:19.402405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.402583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.402618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.402747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.402779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.402891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.402923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.403182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.403218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 
00:27:28.647 [2024-12-10 05:04:19.403331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.403371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.403476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.403510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.403624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.403658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.403900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.403932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.404053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.404086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 
00:27:28.647 [2024-12-10 05:04:19.404212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.404248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.404529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.404563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.404766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.404799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.404907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.404938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.405126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.405158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 
00:27:28.647 [2024-12-10 05:04:19.405305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.405340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.405462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.405495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.405624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.405657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.405899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.405931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.406119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.406152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 
00:27:28.647 [2024-12-10 05:04:19.406269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.406303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.406419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.406453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.406564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.406598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.406776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.406809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.406914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.406946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 
00:27:28.647 [2024-12-10 05:04:19.407137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.407180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.407301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.407333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.407467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.407500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.407686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.407719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-10 05:04:19.407894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.647 [2024-12-10 05:04:19.407928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.647 qpair failed and we were unable to recover it. 
00:27:28.647 [2024-12-10 05:04:19.408041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.647 [2024-12-10 05:04:19.408072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.647 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock connection error, "qpair failed and we were unable to recover it." — repeats continuously from 05:04:19.408 through 05:04:19.430, all against addr=10.0.0.2, port=4420; most repetitions are for tqpair=0x12521a0, with a brief run for tqpair=0x7f58dc000b90 around 05:04:19.416-19.417 ...]
00:27:28.650 [2024-12-10 05:04:19.430304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.650 [2024-12-10 05:04:19.430322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.650 qpair failed and we were unable to recover it. 00:27:28.650 [2024-12-10 05:04:19.430468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.650 [2024-12-10 05:04:19.430486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.650 qpair failed and we were unable to recover it. 00:27:28.650 [2024-12-10 05:04:19.430576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.650 [2024-12-10 05:04:19.430614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.650 qpair failed and we were unable to recover it. 00:27:28.650 [2024-12-10 05:04:19.430740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.650 [2024-12-10 05:04:19.430776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.650 qpair failed and we were unable to recover it. 00:27:28.650 [2024-12-10 05:04:19.430946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.650 [2024-12-10 05:04:19.430978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.650 qpair failed and we were unable to recover it. 
00:27:28.650 [2024-12-10 05:04:19.431178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.650 [2024-12-10 05:04:19.431197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.650 qpair failed and we were unable to recover it. 00:27:28.650 [2024-12-10 05:04:19.431334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.650 [2024-12-10 05:04:19.431351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.650 qpair failed and we were unable to recover it. 00:27:28.650 [2024-12-10 05:04:19.431438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.650 [2024-12-10 05:04:19.431454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.650 qpair failed and we were unable to recover it. 00:27:28.650 [2024-12-10 05:04:19.431594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.431611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.431820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.431860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 
00:27:28.651 [2024-12-10 05:04:19.432123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.432155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.432298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.432334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.432603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.432637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.432924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.432959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.433193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.433211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 
00:27:28.651 [2024-12-10 05:04:19.433365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.433382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.433538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.433555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.433694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.433711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.433857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.433875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.434027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.434059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 
00:27:28.651 [2024-12-10 05:04:19.434183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.434217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.434338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.434374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.434495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.434528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.434724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.434758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.435004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.435038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 
00:27:28.651 [2024-12-10 05:04:19.435164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.435187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.435262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.435280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.435359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.435375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.435528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.435546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.435733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.435767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 
00:27:28.651 [2024-12-10 05:04:19.435880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.435914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.436087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.436122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.436214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.436230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.436373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.436390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.436546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.436564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 
00:27:28.651 [2024-12-10 05:04:19.436655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.436672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.436748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.436771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.436856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.436871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.436953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.436968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.437036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.437053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 
00:27:28.651 [2024-12-10 05:04:19.437203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.437238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.437466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.437499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.437730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.437763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.437902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.437934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.438057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.438074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 
00:27:28.651 [2024-12-10 05:04:19.438246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.438265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.438417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.438434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.651 [2024-12-10 05:04:19.438516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.651 [2024-12-10 05:04:19.438532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.651 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.438603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.438619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.438842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.438861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 
00:27:28.652 [2024-12-10 05:04:19.438951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.438966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.439047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.439064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.439175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.439210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.439330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.439363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.439547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.439581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 
00:27:28.652 [2024-12-10 05:04:19.439770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.439804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.439929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.439946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.440019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.440036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.440207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.440226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.440379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.440396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 
00:27:28.652 [2024-12-10 05:04:19.440543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.440561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.440720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.440752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.440865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.440899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.441007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.441041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.441236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.441272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 
00:27:28.652 [2024-12-10 05:04:19.441402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.441437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.441611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.441646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.441752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.441784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.441967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.442000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.442189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.442226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 
00:27:28.652 [2024-12-10 05:04:19.442335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.442369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.442605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.442638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.442764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.442799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.442987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.443019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.443141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.443184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 
00:27:28.652 [2024-12-10 05:04:19.443356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.443390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.443571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.443604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.443731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.443764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.443940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.443974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 00:27:28.652 [2024-12-10 05:04:19.444106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.652 [2024-12-10 05:04:19.444140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.652 qpair failed and we were unable to recover it. 
00:27:28.652 [... the same three-line failure pattern repeats continuously: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." Record timestamps run from 05:04:19.444287 through 05:04:19.463546 ...]
00:27:28.655 [2024-12-10 05:04:19.463787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.655 [2024-12-10 05:04:19.463822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.655 qpair failed and we were unable to recover it. 00:27:28.655 [2024-12-10 05:04:19.464001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.655 [2024-12-10 05:04:19.464035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.655 qpair failed and we were unable to recover it. 00:27:28.655 [2024-12-10 05:04:19.464212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.655 [2024-12-10 05:04:19.464231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.655 qpair failed and we were unable to recover it. 00:27:28.655 [2024-12-10 05:04:19.464381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.655 [2024-12-10 05:04:19.464400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.655 qpair failed and we were unable to recover it. 00:27:28.655 [2024-12-10 05:04:19.464549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.655 [2024-12-10 05:04:19.464566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.655 qpair failed and we were unable to recover it. 
00:27:28.655 [2024-12-10 05:04:19.464721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.655 [2024-12-10 05:04:19.464754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.655 qpair failed and we were unable to recover it. 00:27:28.655 [2024-12-10 05:04:19.464951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.655 [2024-12-10 05:04:19.464985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.655 qpair failed and we were unable to recover it. 00:27:28.655 [2024-12-10 05:04:19.465187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.655 [2024-12-10 05:04:19.465234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.655 qpair failed and we were unable to recover it. 00:27:28.655 [2024-12-10 05:04:19.465317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.655 [2024-12-10 05:04:19.465335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.655 qpair failed and we were unable to recover it. 00:27:28.655 [2024-12-10 05:04:19.465530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.655 [2024-12-10 05:04:19.465564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.655 qpair failed and we were unable to recover it. 
00:27:28.655 [2024-12-10 05:04:19.465736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.655 [2024-12-10 05:04:19.465772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.655 qpair failed and we were unable to recover it. 00:27:28.655 [2024-12-10 05:04:19.465907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.655 [2024-12-10 05:04:19.465941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.655 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.466128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.466146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.466226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.466243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.466497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.466530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 
00:27:28.656 [2024-12-10 05:04:19.466635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.466667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.466791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.466825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.467028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.467046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.467207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.467226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.467367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.467384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 
00:27:28.656 [2024-12-10 05:04:19.467569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.467603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.467798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.467831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.467971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.467988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.468133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.468151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.468320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.468353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 
00:27:28.656 [2024-12-10 05:04:19.468470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.468503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.468708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.468743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.468860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.468901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.469106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.469124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.469264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.469282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 
00:27:28.656 [2024-12-10 05:04:19.469422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.469441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.469594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.469611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.469699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.469744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.469951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.469983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.470184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.470220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 
00:27:28.656 [2024-12-10 05:04:19.470337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.470356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.470515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.470533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.470774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.470808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.470921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.470954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.471080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.471115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 
00:27:28.656 [2024-12-10 05:04:19.471247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.471266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.471437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.471454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.471528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.471544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.471641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.471657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.471869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.471902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 
00:27:28.656 [2024-12-10 05:04:19.472015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.472048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.472231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.472267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.472444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.472478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.472682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.472716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.656 [2024-12-10 05:04:19.472843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.472876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 
00:27:28.656 [2024-12-10 05:04:19.473051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.656 [2024-12-10 05:04:19.473070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.656 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.473189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.473223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.473353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.473388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.473516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.473549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.473665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.473698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 
00:27:28.657 [2024-12-10 05:04:19.473802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.473835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.473951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.473983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.474091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.474108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.474190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.474207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.474289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.474306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 
00:27:28.657 [2024-12-10 05:04:19.474378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.474395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.474550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.474567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.474794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.474811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.474886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.474905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.474981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.474997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 
00:27:28.657 [2024-12-10 05:04:19.475073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.475089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.475228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.475248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.475405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.475438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.475625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.475660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.475787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.475820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 
00:27:28.657 [2024-12-10 05:04:19.475935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.475953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.476143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.476160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.476331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.476367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.476573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.476606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 00:27:28.657 [2024-12-10 05:04:19.476784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.657 [2024-12-10 05:04:19.476817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.657 qpair failed and we were unable to recover it. 
00:27:28.657 [2024-12-10 05:04:19.476946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.657 [2024-12-10 05:04:19.476964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.657 qpair failed and we were unable to recover it.
00:27:28.657 [2024-12-10 05:04:19.477035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.657 [2024-12-10 05:04:19.477051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.657 qpair failed and we were unable to recover it.
00:27:28.657 [2024-12-10 05:04:19.477209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.657 [2024-12-10 05:04:19.477228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.657 qpair failed and we were unable to recover it.
00:27:28.657 [2024-12-10 05:04:19.477309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.657 [2024-12-10 05:04:19.477326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.657 qpair failed and we were unable to recover it.
00:27:28.657 [2024-12-10 05:04:19.477484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.657 [2024-12-10 05:04:19.477501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.657 qpair failed and we were unable to recover it.
00:27:28.657 [2024-12-10 05:04:19.477656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.657 [2024-12-10 05:04:19.477675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.657 qpair failed and we were unable to recover it.
00:27:28.657 [2024-12-10 05:04:19.477900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.657 [2024-12-10 05:04:19.477933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.657 qpair failed and we were unable to recover it.
00:27:28.657 [2024-12-10 05:04:19.478057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.657 [2024-12-10 05:04:19.478091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.657 qpair failed and we were unable to recover it.
00:27:28.657 [2024-12-10 05:04:19.478282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.657 [2024-12-10 05:04:19.478318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.657 qpair failed and we were unable to recover it.
00:27:28.657 [2024-12-10 05:04:19.478443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.657 [2024-12-10 05:04:19.478478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.657 qpair failed and we were unable to recover it.
00:27:28.657 [2024-12-10 05:04:19.478607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.657 [2024-12-10 05:04:19.478640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.657 qpair failed and we were unable to recover it.
00:27:28.657 [2024-12-10 05:04:19.478776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.657 [2024-12-10 05:04:19.478811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.657 qpair failed and we were unable to recover it.
00:27:28.657 [2024-12-10 05:04:19.478990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.657 [2024-12-10 05:04:19.479024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.657 qpair failed and we were unable to recover it.
00:27:28.657 [2024-12-10 05:04:19.479218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.657 [2024-12-10 05:04:19.479253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.657 qpair failed and we were unable to recover it.
00:27:28.657 [2024-12-10 05:04:19.479518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.657 [2024-12-10 05:04:19.479553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.657 qpair failed and we were unable to recover it.
00:27:28.657 [2024-12-10 05:04:19.479812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.479845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.480120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.480153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.480403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.480438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.480708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.480742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.480934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.480969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.481210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.481246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.481512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.481544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.481729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.481762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.482001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.482019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.482195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.482231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.482376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.482410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.482591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.482625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.482752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.482785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.482906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.482945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.483123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.483141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.483228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.483244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.483438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.483471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.483645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.483679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.483818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.483852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.484042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.484061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.484235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.484271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.484385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.484418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.484540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.484573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.484752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.484785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.484971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.485005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.485186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.485222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.485455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.485472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.485543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.485560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.485702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.485719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.485857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.485876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.485962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.485978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.486122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.486139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.486280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.486300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.486365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.486380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.486479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.486496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.486575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.486589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.486726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.486742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.486840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.486878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.486984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.487015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.658 qpair failed and we were unable to recover it.
00:27:28.658 [2024-12-10 05:04:19.487115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.658 [2024-12-10 05:04:19.487144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.487285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.487318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.487503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.487540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.487723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.487754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.487925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.487954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.488073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.488089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.488225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.488241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.488323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.488337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.488519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.488536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.488703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.488717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.488823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.488856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.488978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.489007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.489185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.489216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.489400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.489432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.489623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.489652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.489848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.489892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.490057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.490072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.490211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.490227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.490378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.490394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.490532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.490547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.490629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.490646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.490879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.490895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.490986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.491003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.491101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.491119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.491335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.491353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.491442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.491459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.491535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.491552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.491638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.491655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.491801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.491817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.491886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.491904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.492052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.492067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.492154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.492209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.492404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.492436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.492679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.492710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.492913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.492944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.659 qpair failed and we were unable to recover it.
00:27:28.659 [2024-12-10 05:04:19.493137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.659 [2024-12-10 05:04:19.493182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.493363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.493380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.493461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.493477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.493559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.493577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.493809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.493843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.494041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.494075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.494188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.494223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.494468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.494485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.494576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.494613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.494744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.494777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.494898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.494931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.495048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.495080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.495200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.495218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.495289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.495305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.495463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.495481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.495662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.495681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.495762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.495779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.495877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.495894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.495965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.495981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.496134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.496152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.496261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.496279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.496357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.496377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.496630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.496647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.496736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.496753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.496910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.496928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.497019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.660 [2024-12-10 05:04:19.497052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.660 qpair failed and we were unable to recover it.
00:27:28.660 [2024-12-10 05:04:19.497245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.660 [2024-12-10 05:04:19.497280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.660 qpair failed and we were unable to recover it. 00:27:28.660 [2024-12-10 05:04:19.497409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.660 [2024-12-10 05:04:19.497442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.660 qpair failed and we were unable to recover it. 00:27:28.660 [2024-12-10 05:04:19.497648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.660 [2024-12-10 05:04:19.497680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.660 qpair failed and we were unable to recover it. 00:27:28.660 [2024-12-10 05:04:19.497872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.660 [2024-12-10 05:04:19.497908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.660 qpair failed and we were unable to recover it. 00:27:28.660 [2024-12-10 05:04:19.498015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.660 [2024-12-10 05:04:19.498048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.660 qpair failed and we were unable to recover it. 
00:27:28.660 [2024-12-10 05:04:19.498271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.660 [2024-12-10 05:04:19.498306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.660 qpair failed and we were unable to recover it. 00:27:28.660 [2024-12-10 05:04:19.498494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.660 [2024-12-10 05:04:19.498526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.660 qpair failed and we were unable to recover it. 00:27:28.660 [2024-12-10 05:04:19.498704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.660 [2024-12-10 05:04:19.498737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.660 qpair failed and we were unable to recover it. 00:27:28.660 [2024-12-10 05:04:19.498903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.660 [2024-12-10 05:04:19.498922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.660 qpair failed and we were unable to recover it. 00:27:28.660 [2024-12-10 05:04:19.499021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.660 [2024-12-10 05:04:19.499038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.660 qpair failed and we were unable to recover it. 
00:27:28.660 [2024-12-10 05:04:19.499185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.660 [2024-12-10 05:04:19.499203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.660 qpair failed and we were unable to recover it. 00:27:28.660 [2024-12-10 05:04:19.499410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.660 [2024-12-10 05:04:19.499429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.660 qpair failed and we were unable to recover it. 00:27:28.660 [2024-12-10 05:04:19.499587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.660 [2024-12-10 05:04:19.499603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.660 qpair failed and we were unable to recover it. 00:27:28.660 [2024-12-10 05:04:19.499764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.499797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.499909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.499942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 
00:27:28.661 [2024-12-10 05:04:19.500063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.500096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.500277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.500312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.500562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.500579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.500650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.500666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.500845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.500863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 
00:27:28.661 [2024-12-10 05:04:19.501019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.501037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.501115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.501133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.501291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.501334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.501452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.501487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.501617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.501650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 
00:27:28.661 [2024-12-10 05:04:19.501782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.501814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.502009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.502026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.502177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.502195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.502291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.502308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.502451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.502468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 
00:27:28.661 [2024-12-10 05:04:19.502555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.502573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.502654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.502670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.502751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.502769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.502912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.502929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.503015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.503032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 
00:27:28.661 [2024-12-10 05:04:19.503190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.503208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.503290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.503311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.503394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.503411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.503520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.503554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.503730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.503761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 
00:27:28.661 [2024-12-10 05:04:19.503872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.503904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.504108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.504141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.504258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.504292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.504415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.504448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.504623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.504656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 
00:27:28.661 [2024-12-10 05:04:19.504845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.504878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.505051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.505085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.505255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.505273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.505376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.505394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.505476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.505494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 
00:27:28.661 [2024-12-10 05:04:19.505647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.505688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.505807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.661 [2024-12-10 05:04:19.505839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.661 qpair failed and we were unable to recover it. 00:27:28.661 [2024-12-10 05:04:19.506033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.506065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.506248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.506267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.506357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.506373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 
00:27:28.662 [2024-12-10 05:04:19.506449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.506465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.506568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.506600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.506705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.506738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.506982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.507016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.507199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.507217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 
00:27:28.662 [2024-12-10 05:04:19.507360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.507397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.507594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.507627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.507813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.507846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.508031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.508055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.508194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.508213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 
00:27:28.662 [2024-12-10 05:04:19.508303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.508319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.508405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.508422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.508586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.508605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.508766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.508783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.508918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.508936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 
00:27:28.662 [2024-12-10 05:04:19.509014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.509031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.509178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.509197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.509273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.509290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.509404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.509422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 00:27:28.662 [2024-12-10 05:04:19.509491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.662 [2024-12-10 05:04:19.509508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.662 qpair failed and we were unable to recover it. 
00:27:28.662 [2024-12-10 05:04:19.509761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.662 [2024-12-10 05:04:19.509797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.662 qpair failed and we were unable to recover it.
00:27:28.662 [2024-12-10 05:04:19.509969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.662 [2024-12-10 05:04:19.510003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.662 qpair failed and we were unable to recover it.
00:27:28.662 [2024-12-10 05:04:19.510130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.662 [2024-12-10 05:04:19.510162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.662 qpair failed and we were unable to recover it.
00:27:28.662 [2024-12-10 05:04:19.510280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.662 [2024-12-10 05:04:19.510314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.662 qpair failed and we were unable to recover it.
00:27:28.662 [2024-12-10 05:04:19.510441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.662 [2024-12-10 05:04:19.510459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.662 qpair failed and we were unable to recover it.
00:27:28.662 [2024-12-10 05:04:19.510607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.662 [2024-12-10 05:04:19.510625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.662 qpair failed and we were unable to recover it.
00:27:28.662 [2024-12-10 05:04:19.510707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.662 [2024-12-10 05:04:19.510725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.662 qpair failed and we were unable to recover it.
00:27:28.662 [2024-12-10 05:04:19.510805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.662 [2024-12-10 05:04:19.510822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.662 qpair failed and we were unable to recover it.
00:27:28.662 [2024-12-10 05:04:19.510929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.662 [2024-12-10 05:04:19.510946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.662 qpair failed and we were unable to recover it.
00:27:28.662 [2024-12-10 05:04:19.511037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.662 [2024-12-10 05:04:19.511054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.662 qpair failed and we were unable to recover it.
00:27:28.662 [2024-12-10 05:04:19.511263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.662 [2024-12-10 05:04:19.511281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.662 qpair failed and we were unable to recover it.
00:27:28.662 [2024-12-10 05:04:19.511510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.662 [2024-12-10 05:04:19.511527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.662 qpair failed and we were unable to recover it.
00:27:28.662 [2024-12-10 05:04:19.511599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.662 [2024-12-10 05:04:19.511642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.662 qpair failed and we were unable to recover it.
00:27:28.662 [2024-12-10 05:04:19.511771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.662 [2024-12-10 05:04:19.511804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.662 qpair failed and we were unable to recover it.
00:27:28.662 [2024-12-10 05:04:19.511994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.662 [2024-12-10 05:04:19.512027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.662 qpair failed and we were unable to recover it.
00:27:28.662 [2024-12-10 05:04:19.512149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.662 [2024-12-10 05:04:19.512195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.662 qpair failed and we were unable to recover it.
00:27:28.662 [2024-12-10 05:04:19.512366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.662 [2024-12-10 05:04:19.512383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.512537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.512554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.512657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.512675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.512761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.512803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.512991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.513024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.513143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.513186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.513447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.513479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.513668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.513702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.513989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.514022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.514132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.514150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.514304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.514322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.514415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.514455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.514561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.514594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.514699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.514734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.514981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.515014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.515214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.515249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.515359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.515394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.515501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.515534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.515638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.515671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.515798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.515831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.516096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.516131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.516276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.516310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.516499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.516516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.516616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.516650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.516833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.516865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.517062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.517094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.517266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.517285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.517371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.517389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.517617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.517634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.517707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.517724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.517864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.517882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.517983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.518001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.518093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.518110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.518198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.518217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.518362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.518379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.518600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.518617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.518710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.518727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.518831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.518848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.519024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.519042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.519122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.663 [2024-12-10 05:04:19.519139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.663 qpair failed and we were unable to recover it.
00:27:28.663 [2024-12-10 05:04:19.519244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.519262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.519332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.519348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.519439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.519456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.519559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.519576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.519654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.519669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.519763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.519782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.519861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.519879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.519950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.519965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.520051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.520069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.520158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.520196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.520265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.520281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.520450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.520468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.520618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.520651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.520839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.520871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.521053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.521086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.521276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.521295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.521375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.521392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.521478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.521495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.521581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.521598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.521672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.521688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.521901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.521934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.522061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.522094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.522215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.522250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.522374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.522407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.522521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.522553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.522684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.522717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.522916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.522949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.523121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.523158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.523296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.523314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.523461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.523478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.523573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.523591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.523745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.523762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.523852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.523870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.523955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.523971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.524050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.524068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.664 [2024-12-10 05:04:19.524211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.664 [2024-12-10 05:04:19.524229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.664 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.524326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.524343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.524431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.524465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.524657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.524690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.524804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.524837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.524962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.524993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.525106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.525140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.525267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.525285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.525432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.525450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.525609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.525626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.525765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.525782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.525858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.525874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.525956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.525973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.526139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.526156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.526255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.526272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.526427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.526445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.526550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.526567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.526708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.526726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.526811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.526827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.526976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.526997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.527088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.527105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.527184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.527202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.527344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.527364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.527507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.665 [2024-12-10 05:04:19.527524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.665 qpair failed and we were unable to recover it.
00:27:28.665 [2024-12-10 05:04:19.527690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.665 [2024-12-10 05:04:19.527707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.665 qpair failed and we were unable to recover it. 00:27:28.665 [2024-12-10 05:04:19.527866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.665 [2024-12-10 05:04:19.527883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.665 qpair failed and we were unable to recover it. 00:27:28.665 [2024-12-10 05:04:19.528055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.665 [2024-12-10 05:04:19.528074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.665 qpair failed and we were unable to recover it. 00:27:28.665 [2024-12-10 05:04:19.528158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.665 [2024-12-10 05:04:19.528227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.665 qpair failed and we were unable to recover it. 00:27:28.665 [2024-12-10 05:04:19.528357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.665 [2024-12-10 05:04:19.528391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.665 qpair failed and we were unable to recover it. 
00:27:28.665 [2024-12-10 05:04:19.528593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.665 [2024-12-10 05:04:19.528628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.665 qpair failed and we were unable to recover it. 00:27:28.665 [2024-12-10 05:04:19.528802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.665 [2024-12-10 05:04:19.528835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.665 qpair failed and we were unable to recover it. 00:27:28.665 [2024-12-10 05:04:19.528956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.665 [2024-12-10 05:04:19.528988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.665 qpair failed and we were unable to recover it. 00:27:28.665 [2024-12-10 05:04:19.529103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.665 [2024-12-10 05:04:19.529137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.665 qpair failed and we were unable to recover it. 00:27:28.665 [2024-12-10 05:04:19.529429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.665 [2024-12-10 05:04:19.529476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.665 qpair failed and we were unable to recover it. 
00:27:28.665 [2024-12-10 05:04:19.529603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.665 [2024-12-10 05:04:19.529635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.665 qpair failed and we were unable to recover it. 00:27:28.665 [2024-12-10 05:04:19.529855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.665 [2024-12-10 05:04:19.529889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.665 qpair failed and we were unable to recover it. 00:27:28.665 [2024-12-10 05:04:19.530084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.665 [2024-12-10 05:04:19.530118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.665 qpair failed and we were unable to recover it. 00:27:28.665 [2024-12-10 05:04:19.530257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.665 [2024-12-10 05:04:19.530291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.665 qpair failed and we were unable to recover it. 00:27:28.665 [2024-12-10 05:04:19.530442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.665 [2024-12-10 05:04:19.530461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.665 qpair failed and we were unable to recover it. 
00:27:28.665 [2024-12-10 05:04:19.530543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.665 [2024-12-10 05:04:19.530561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.665 qpair failed and we were unable to recover it. 00:27:28.665 [2024-12-10 05:04:19.530643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.530660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.530750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.530767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.530906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.530923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.531021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.531038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 
00:27:28.666 [2024-12-10 05:04:19.531119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.531136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.531220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.531237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.531321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.531341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.531477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.531494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.531633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.531650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 
00:27:28.666 [2024-12-10 05:04:19.531736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.531754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.531886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.531904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.532058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.532076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.532147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.532163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.532235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.532252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 
00:27:28.666 [2024-12-10 05:04:19.532319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.532335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.532405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.532420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.532521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.532539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.532630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.532647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.532855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.532888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 
00:27:28.666 [2024-12-10 05:04:19.533006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.533039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.533229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.533264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.533378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.533395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.533583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.533617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.533790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.533824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 
00:27:28.666 [2024-12-10 05:04:19.534068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.534101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.534214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.534232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.534370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.534387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.534634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.534651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.534811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.534829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 
00:27:28.666 [2024-12-10 05:04:19.534935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.534967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.535154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.535199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.535324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.535357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.535527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.535560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.535673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.535707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 
00:27:28.666 [2024-12-10 05:04:19.535868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.535901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.536038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.536070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.536227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.536269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.536349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.536367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 00:27:28.666 [2024-12-10 05:04:19.536520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.536537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.666 qpair failed and we were unable to recover it. 
00:27:28.666 [2024-12-10 05:04:19.536635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.666 [2024-12-10 05:04:19.536654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 00:27:28.667 [2024-12-10 05:04:19.536802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.536835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 00:27:28.667 [2024-12-10 05:04:19.537014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.537046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 00:27:28.667 [2024-12-10 05:04:19.537218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.537253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 00:27:28.667 [2024-12-10 05:04:19.537365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.537383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 
00:27:28.667 [2024-12-10 05:04:19.537537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.537554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 00:27:28.667 [2024-12-10 05:04:19.537658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.537675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 00:27:28.667 [2024-12-10 05:04:19.537751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.537770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 00:27:28.667 [2024-12-10 05:04:19.537983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.538004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 00:27:28.667 [2024-12-10 05:04:19.538147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.538164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 
00:27:28.667 [2024-12-10 05:04:19.538260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.538302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 00:27:28.667 [2024-12-10 05:04:19.538424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.538458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 00:27:28.667 [2024-12-10 05:04:19.538566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.538598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 00:27:28.667 [2024-12-10 05:04:19.538722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.538755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 00:27:28.667 [2024-12-10 05:04:19.538935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.538968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 
00:27:28.667 [2024-12-10 05:04:19.539160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.539205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 00:27:28.667 [2024-12-10 05:04:19.539343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.539375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 00:27:28.667 [2024-12-10 05:04:19.539562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.539597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 00:27:28.667 [2024-12-10 05:04:19.539774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.539806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 00:27:28.667 [2024-12-10 05:04:19.539909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.667 [2024-12-10 05:04:19.539941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.667 qpair failed and we were unable to recover it. 
00:27:28.667 [2024-12-10 05:04:19.540058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.667 [2024-12-10 05:04:19.540091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.667 qpair failed and we were unable to recover it.
[... the same three-entry sequence — posix.c:1054:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error, then "qpair failed and we were unable to recover it." — repeats continuously from 05:04:19.540 through 05:04:19.562. Nearly all attempts are for tqpair=0x12521a0; a cluster of attempts around 05:04:19.552–19.555 use tqpair=0x7f58e8000b90, and one at 05:04:19.555 uses tqpair=0x7f58dc000b90. Every attempt targets addr=10.0.0.2, port=4420 and fails identically ...]
00:27:28.670 [2024-12-10 05:04:19.562183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.562200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 00:27:28.670 [2024-12-10 05:04:19.562287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.562305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 00:27:28.670 [2024-12-10 05:04:19.562405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.562422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 00:27:28.670 [2024-12-10 05:04:19.562496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.562512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 00:27:28.670 [2024-12-10 05:04:19.562594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.562611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 
00:27:28.670 [2024-12-10 05:04:19.562757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.562773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 00:27:28.670 [2024-12-10 05:04:19.562946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.562964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 00:27:28.670 [2024-12-10 05:04:19.563036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.563053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 00:27:28.670 [2024-12-10 05:04:19.563188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.563208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 00:27:28.670 [2024-12-10 05:04:19.563367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.563385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 
00:27:28.670 [2024-12-10 05:04:19.563542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.563560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 00:27:28.670 [2024-12-10 05:04:19.563648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.563664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 00:27:28.670 [2024-12-10 05:04:19.563812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.563829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 00:27:28.670 [2024-12-10 05:04:19.563982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.563999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 00:27:28.670 [2024-12-10 05:04:19.564082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.564098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 
00:27:28.670 [2024-12-10 05:04:19.564189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.564207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 00:27:28.670 [2024-12-10 05:04:19.564361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.564380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 00:27:28.670 [2024-12-10 05:04:19.564463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.564480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 00:27:28.670 [2024-12-10 05:04:19.564621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.670 [2024-12-10 05:04:19.564639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.670 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.564725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.564741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 
00:27:28.671 [2024-12-10 05:04:19.564816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.564832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.565055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.565074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.565186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.565206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.565349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.565366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.565583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.565601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 
00:27:28.671 [2024-12-10 05:04:19.565753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.565770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.565867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.565885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.566060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.566085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.566242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.566261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.566360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.566378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 
00:27:28.671 [2024-12-10 05:04:19.566467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.566483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.566563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.566580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.566659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.566675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.566765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.566784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.566876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.566894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 
00:27:28.671 [2024-12-10 05:04:19.567037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.567056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.567147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.567172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.567274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.567292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.567435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.567452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.567635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.567652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 
00:27:28.671 [2024-12-10 05:04:19.567733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.567750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.567835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.567852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.568008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.568025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.568124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.568141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.568249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.568270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 
00:27:28.671 [2024-12-10 05:04:19.568346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.568364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.568445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.568463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.568535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.568551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.568633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.568651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.568870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.568899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 
00:27:28.671 [2024-12-10 05:04:19.569009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.569027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.569183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.569201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.569342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.569361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.569438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.569454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.569535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.569551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 
00:27:28.671 [2024-12-10 05:04:19.569617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.569633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.569775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.569792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-12-10 05:04:19.569867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.671 [2024-12-10 05:04:19.569884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-12-10 05:04:19.569954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.569970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-12-10 05:04:19.570065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.570081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 
00:27:28.672 [2024-12-10 05:04:19.570181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.570199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-12-10 05:04:19.570295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.570313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-12-10 05:04:19.570458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.570476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-12-10 05:04:19.570554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.570571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-12-10 05:04:19.570651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.570667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 
00:27:28.672 [2024-12-10 05:04:19.570747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.570763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-12-10 05:04:19.570828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.570845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-12-10 05:04:19.570939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.570957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-12-10 05:04:19.571045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.571062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-12-10 05:04:19.571165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.571193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 
00:27:28.672 [2024-12-10 05:04:19.571286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.571304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-12-10 05:04:19.571394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.571413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-12-10 05:04:19.571554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.571572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-12-10 05:04:19.571714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.571732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-12-10 05:04:19.571811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.571826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 
00:27:28.672 [2024-12-10 05:04:19.571907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.672 [2024-12-10 05:04:19.571922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.672 qpair failed and we were unable to recover it. 
00:27:28.675 [identical posix_sock_create/nvme_tcp_qpair_connect_sock error pairs for tqpair=0x12521a0, addr=10.0.0.2, port=4420 repeated from 05:04:19.572081 through 05:04:19.586387; duplicates omitted]
00:27:28.675 [2024-12-10 05:04:19.586527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.586544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 00:27:28.675 [2024-12-10 05:04:19.586683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.586700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 00:27:28.675 [2024-12-10 05:04:19.586840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.586859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 00:27:28.675 [2024-12-10 05:04:19.586932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.586949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 00:27:28.675 [2024-12-10 05:04:19.587091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.587108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 
00:27:28.675 [2024-12-10 05:04:19.587246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.587265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 00:27:28.675 [2024-12-10 05:04:19.587402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.587420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 00:27:28.675 [2024-12-10 05:04:19.587537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.587553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 00:27:28.675 [2024-12-10 05:04:19.587668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.587689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 00:27:28.675 [2024-12-10 05:04:19.587763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.587780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 
00:27:28.675 [2024-12-10 05:04:19.587923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.587940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 00:27:28.675 [2024-12-10 05:04:19.588078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.588094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 00:27:28.675 [2024-12-10 05:04:19.588162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.588188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 00:27:28.675 [2024-12-10 05:04:19.588329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.588347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 00:27:28.675 [2024-12-10 05:04:19.588425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.588442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 
00:27:28.675 [2024-12-10 05:04:19.588521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.588538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 00:27:28.675 [2024-12-10 05:04:19.588626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.588643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 00:27:28.675 [2024-12-10 05:04:19.588785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.588802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 00:27:28.675 [2024-12-10 05:04:19.588939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.588956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 00:27:28.675 [2024-12-10 05:04:19.589096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.589112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 
00:27:28.675 [2024-12-10 05:04:19.589223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.675 [2024-12-10 05:04:19.589242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.675 qpair failed and we were unable to recover it. 00:27:28.675 [2024-12-10 05:04:19.589346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.589364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.589450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.589467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.589550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.589567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.589661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.589676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 
00:27:28.676 [2024-12-10 05:04:19.589770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.589786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.589947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.589964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.590044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.590061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.590136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.590153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.590244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.590262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 
00:27:28.676 [2024-12-10 05:04:19.590399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.590416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.590509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.590527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.590752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.590770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.590854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.590875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.590948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.590966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 
00:27:28.676 [2024-12-10 05:04:19.591059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.591076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.591194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.591215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.591425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.591442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.591542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.591560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.591768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.591785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 
00:27:28.676 [2024-12-10 05:04:19.591965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.591983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.592122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.592139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.592296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.592313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.592406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.592423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.592506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.592523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 
00:27:28.676 [2024-12-10 05:04:19.592703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.592720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.592873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.592890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.592973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.592991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.593151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.593175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.593316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.593333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 
00:27:28.676 [2024-12-10 05:04:19.593473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.593490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.593639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.593656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.593760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.593777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.593858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.593874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.594087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.594159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 
00:27:28.676 [2024-12-10 05:04:19.594474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.594512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.594718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.594754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.594937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.594970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.595078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.595097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 00:27:28.676 [2024-12-10 05:04:19.595199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.676 [2024-12-10 05:04:19.595218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.676 qpair failed and we were unable to recover it. 
00:27:28.676 [2024-12-10 05:04:19.595375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.595392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.595542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.595560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.595719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.595736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.595896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.595913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.596003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.596021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 
00:27:28.677 [2024-12-10 05:04:19.596157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.596180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.596391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.596408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.596507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.596524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.596676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.596693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.596775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.596793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 
00:27:28.677 [2024-12-10 05:04:19.597002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.597019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.597179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.597197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.597356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.597373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.597578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.597595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.597747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.597764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 
00:27:28.677 [2024-12-10 05:04:19.597844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.597861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.598099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.598119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.598258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.598276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.598427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.598444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.598595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.598612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 
00:27:28.677 [2024-12-10 05:04:19.598685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.598702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.598906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.598923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.599018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.599035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.599130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.599148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.599373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.599448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 
00:27:28.677 [2024-12-10 05:04:19.599744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.599782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.599910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.599944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.600072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.600107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.600290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.600311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.600404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.600422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 
00:27:28.677 [2024-12-10 05:04:19.600517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.600535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.600606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.600623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.600705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.600723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.600876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.600894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 00:27:28.677 [2024-12-10 05:04:19.600973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.600990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.677 qpair failed and we were unable to recover it. 
00:27:28.677 [2024-12-10 05:04:19.601074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.677 [2024-12-10 05:04:19.601092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.601240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.601258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.601364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.601382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.601466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.601483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.601564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.601582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 
00:27:28.678 [2024-12-10 05:04:19.601732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.601750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.601953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.601970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.602154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.602179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.602340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.602360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.602449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.602467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 
00:27:28.678 [2024-12-10 05:04:19.602555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.602571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.602652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.602669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.602822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.602840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.603017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.603033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.603118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.603135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 
00:27:28.678 [2024-12-10 05:04:19.603231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.603249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.603348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.603365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.603511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.603529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.603708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.603725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.603892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.603910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 
00:27:28.678 [2024-12-10 05:04:19.603985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.604000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.604079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.604097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.604178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.604197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.604293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.604310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.604467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.604483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 
00:27:28.678 [2024-12-10 05:04:19.604632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.604649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.604723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.604740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.604836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.604853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.604996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.605013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.605096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.605114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 
00:27:28.678 [2024-12-10 05:04:19.605216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.605234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.605387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.605404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.605547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.605564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.605728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.605744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.605912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.605928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 
00:27:28.678 [2024-12-10 05:04:19.606032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.606052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.606143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.606160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.606244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.606261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.606343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.606360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 00:27:28.678 [2024-12-10 05:04:19.606478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.678 [2024-12-10 05:04:19.606494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.678 qpair failed and we were unable to recover it. 
00:27:28.679 [2024-12-10 05:04:19.606650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.606667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.606746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.606763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.606846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.606863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.606998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.607017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.607091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.607109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 
00:27:28.679 [2024-12-10 05:04:19.607201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.607220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.607335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.607352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.607431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.607448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.607603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.607621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.607723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.607741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 
00:27:28.679 [2024-12-10 05:04:19.607880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.607898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.607974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.607993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.608148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.608173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.608327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.608344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.608432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.608449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 
00:27:28.679 [2024-12-10 05:04:19.608543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.608561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.608713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.608731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.608876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.608894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.609041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.609058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.609214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.609232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 
00:27:28.679 [2024-12-10 05:04:19.609321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.609339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.609411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.609426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.609566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.609584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.609744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.609762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.609904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.609921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 
00:27:28.679 [2024-12-10 05:04:19.609993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.610009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.610151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.610175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.610314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.610332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.610411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.610427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.610638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.610654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 
00:27:28.679 [2024-12-10 05:04:19.610842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.610858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.610947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.610964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.611046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.611063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.611221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.611239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.611387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.611403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 
00:27:28.679 [2024-12-10 05:04:19.611485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.611503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.611594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.611611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.611748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.611766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.611861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.679 [2024-12-10 05:04:19.611878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.679 qpair failed and we were unable to recover it. 00:27:28.679 [2024-12-10 05:04:19.611977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.680 [2024-12-10 05:04:19.611994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.680 qpair failed and we were unable to recover it. 
00:27:28.682 [2024-12-10 05:04:19.623375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.682 [2024-12-10 05:04:19.623393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.682 qpair failed and we were unable to recover it.
00:27:28.682 [2024-12-10 05:04:19.623534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.682 [2024-12-10 05:04:19.623552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.682 qpair failed and we were unable to recover it.
00:27:28.682 [2024-12-10 05:04:19.623770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.682 [2024-12-10 05:04:19.623787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.682 qpair failed and we were unable to recover it.
00:27:28.682 [2024-12-10 05:04:19.623856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.682 [2024-12-10 05:04:19.623873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.682 qpair failed and we were unable to recover it.
00:27:28.682 [2024-12-10 05:04:19.623909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12600f0 (9): Bad file descriptor
00:27:28.682 [2024-12-10 05:04:19.624206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.682 [2024-12-10 05:04:19.624279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:28.682 qpair failed and we were unable to recover it.
00:27:28.682 [2024-12-10 05:04:19.624553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.682 [2024-12-10 05:04:19.624592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:28.682 qpair failed and we were unable to recover it.
00:27:28.682 [2024-12-10 05:04:19.624771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.682 [2024-12-10 05:04:19.624791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.682 qpair failed and we were unable to recover it.
00:27:28.682 [2024-12-10 05:04:19.624946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.682 [2024-12-10 05:04:19.624963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.682 qpair failed and we were unable to recover it.
00:27:28.682 [2024-12-10 05:04:19.625125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.682 [2024-12-10 05:04:19.625142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.682 qpair failed and we were unable to recover it.
00:27:28.682 [2024-12-10 05:04:19.625291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.682 [2024-12-10 05:04:19.625308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.682 qpair failed and we were unable to recover it.
00:27:28.682 [2024-12-10 05:04:19.627429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.682 [2024-12-10 05:04:19.627448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.682 qpair failed and we were unable to recover it. 00:27:28.682 [2024-12-10 05:04:19.627526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.682 [2024-12-10 05:04:19.627544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.682 qpair failed and we were unable to recover it. 00:27:28.682 [2024-12-10 05:04:19.627620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.682 [2024-12-10 05:04:19.627635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.682 qpair failed and we were unable to recover it. 00:27:28.682 [2024-12-10 05:04:19.627709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.627725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.627802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.627820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 
00:27:28.683 [2024-12-10 05:04:19.627913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.627929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.628091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.628109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.628245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.628265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.628340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.628357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.628511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.628528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 
00:27:28.683 [2024-12-10 05:04:19.628681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.628698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.628787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.628805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.628963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.628980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.629128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.629147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.629397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.629416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 
00:27:28.683 [2024-12-10 05:04:19.629515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.629532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.629598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.629614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.629702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.629720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.629858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.629875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.630030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.630046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 
00:27:28.683 [2024-12-10 05:04:19.630188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.630207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.630296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.630318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.630402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.630419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.630529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.630546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.630638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.630656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 
00:27:28.683 [2024-12-10 05:04:19.630860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.630878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.631020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.631036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.631181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.631200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.631288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.631306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.631380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.631398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 
00:27:28.683 [2024-12-10 05:04:19.631480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.631498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.631650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.631668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.631892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.631909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.631981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.631998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.632093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.632110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 
00:27:28.683 [2024-12-10 05:04:19.632189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.632208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.632300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.632317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.632414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.632431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.632635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.632651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.632734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.632750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 
00:27:28.683 [2024-12-10 05:04:19.632896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.632914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.632985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.633000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.633076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.633093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.633180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.633199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.633336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.633354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 
00:27:28.683 [2024-12-10 05:04:19.633524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.683 [2024-12-10 05:04:19.633541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.683 qpair failed and we were unable to recover it. 00:27:28.683 [2024-12-10 05:04:19.633631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.633648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.633725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.633743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.633981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.634001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.634154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.634178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 
00:27:28.684 [2024-12-10 05:04:19.634254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.634273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.634355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.634373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.634457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.634474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.634559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.634576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.634640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.634655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 
00:27:28.684 [2024-12-10 05:04:19.634728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.634747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.634817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.634833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.634981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.635000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.635135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.635153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.635261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.635279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 
00:27:28.684 [2024-12-10 05:04:19.635347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.635364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.635507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.635526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.635629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.635647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.635727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.635745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.635821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.635836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 
00:27:28.684 [2024-12-10 05:04:19.635984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.636002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.636080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.636097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.636165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.636189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.636266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.636284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.636369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.636386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 
00:27:28.684 [2024-12-10 05:04:19.636588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.636607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.636764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.636780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.636861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.636878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.636969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.636987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.637122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.637140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 
00:27:28.684 [2024-12-10 05:04:19.637230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.637252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.637459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.637476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.637545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.637562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.637716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.637732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 00:27:28.684 [2024-12-10 05:04:19.637825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.684 [2024-12-10 05:04:19.637845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.684 qpair failed and we were unable to recover it. 
00:27:28.687 [2024-12-10 05:04:19.654070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.687 [2024-12-10 05:04:19.654089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.687 qpair failed and we were unable to recover it. 00:27:28.687 [2024-12-10 05:04:19.654232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.687 [2024-12-10 05:04:19.654250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.687 qpair failed and we were unable to recover it. 00:27:28.687 [2024-12-10 05:04:19.654405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.687 [2024-12-10 05:04:19.654422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.687 qpair failed and we were unable to recover it. 00:27:28.687 [2024-12-10 05:04:19.654569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.687 [2024-12-10 05:04:19.654588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.687 qpair failed and we were unable to recover it. 00:27:28.687 [2024-12-10 05:04:19.654667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.687 [2024-12-10 05:04:19.654684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.687 qpair failed and we were unable to recover it. 
00:27:28.687 [2024-12-10 05:04:19.654829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.687 [2024-12-10 05:04:19.654848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.687 qpair failed and we were unable to recover it. 00:27:28.687 [2024-12-10 05:04:19.654995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.687 [2024-12-10 05:04:19.655011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.655110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.655127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.655223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.655241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.655382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.655400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 
00:27:28.688 [2024-12-10 05:04:19.655484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.655502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.655573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.655589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.655690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.655708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.655787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.655803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.655874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.655891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 
00:27:28.688 [2024-12-10 05:04:19.655966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.655984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.656122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.656139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.656220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.656237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.656457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.656475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.656647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.656664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 
00:27:28.688 [2024-12-10 05:04:19.656746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.656764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.656917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.656935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.657017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.657036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.657114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.657130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.657292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.657310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 
00:27:28.688 [2024-12-10 05:04:19.657461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.657479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.657554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.657571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.657677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.657694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.657788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.657806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.658021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.658038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 
00:27:28.688 [2024-12-10 05:04:19.658116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.658134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.658338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.658356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.658440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.658459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.658549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.658567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.658644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.658661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 
00:27:28.688 [2024-12-10 05:04:19.658748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.658766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.658908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.658925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.659092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.659111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.659194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.659212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.659282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.659297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 
00:27:28.688 [2024-12-10 05:04:19.659381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.659398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.659541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.659559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.659640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.659656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.659796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.659813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.688 [2024-12-10 05:04:19.659962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.659979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 
00:27:28.688 [2024-12-10 05:04:19.660071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.688 [2024-12-10 05:04:19.660087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.688 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.660230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.660247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.660334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.660351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.660494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.660515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.660604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.660621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 
00:27:28.689 [2024-12-10 05:04:19.660690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.660705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.660779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.660797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.660870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.660888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.660978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.660995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.661083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.661100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 
00:27:28.689 [2024-12-10 05:04:19.661194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.661212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.661300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.661318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.661394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.661410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.661635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.661653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.661740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.661757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 
00:27:28.689 [2024-12-10 05:04:19.661907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.661926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.662000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.662017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.662105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.662122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.662269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.662287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.662369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.662386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 
00:27:28.689 [2024-12-10 05:04:19.662541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.662557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.662697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.662716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.662801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.662818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.662886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.662902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.663045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.663062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 
00:27:28.689 [2024-12-10 05:04:19.663223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.663241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.663382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.663399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.663473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.663491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.663630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.663647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.663736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.663754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 
00:27:28.689 [2024-12-10 05:04:19.663917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.663936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.664026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.664043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.664188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.664206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.664387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.664403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.664475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.664493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 
00:27:28.689 [2024-12-10 05:04:19.664581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.664597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.664735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.664754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.664829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.664846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.664934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.664950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 00:27:28.689 [2024-12-10 05:04:19.665047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.689 [2024-12-10 05:04:19.665065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.689 qpair failed and we were unable to recover it. 
00:27:28.689 [2024-12-10 05:04:19.665207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.665225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.665300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.665319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.665411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.665428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.665520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.665539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.665618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.665635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 
00:27:28.690 [2024-12-10 05:04:19.665720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.665737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.665827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.665846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.665989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.666006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.666071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.666087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.666254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.666273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 
00:27:28.690 [2024-12-10 05:04:19.666355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.666373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.666438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.666454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.666558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.666575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.666795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.666812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.666988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.667005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 
00:27:28.690 [2024-12-10 05:04:19.667176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.667195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.667268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.667285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.667422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.667442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.667663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.667681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.667836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.667853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 
00:27:28.690 [2024-12-10 05:04:19.667937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.667954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.668033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.668050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.668134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.668152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.668240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.668258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.668357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.668373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 
00:27:28.690 [2024-12-10 05:04:19.668512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.668529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.668676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.668694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.668923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.668941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.669011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.669028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.669114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.669132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 
00:27:28.690 [2024-12-10 05:04:19.669229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.669249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.669413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.669431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.669514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.669532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.669690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.690 [2024-12-10 05:04:19.669706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.690 qpair failed and we were unable to recover it. 00:27:28.690 [2024-12-10 05:04:19.669857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.669875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 
00:27:28.691 [2024-12-10 05:04:19.670032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.670050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.670220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.670247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.670347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.670365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.670461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.670478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.670617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.670635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 
00:27:28.691 [2024-12-10 05:04:19.670790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.670807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.670961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.670978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.671067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.671085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.671301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.671320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.671468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.671486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 
00:27:28.691 [2024-12-10 05:04:19.671591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.671608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.671707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.671724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.671815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.671832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.672061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.672079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.672201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.672227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 
00:27:28.691 [2024-12-10 05:04:19.672311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.672329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.672431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.672448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.672607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.672623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.672763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.672780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.673025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.673043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 
00:27:28.691 [2024-12-10 05:04:19.673203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.673222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.673364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.673382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.673477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.673495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.673587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.673605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.673679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.673697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 
00:27:28.691 [2024-12-10 05:04:19.673771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.673789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.673946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.673964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.674033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.674050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.674203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.674223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.674311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.674328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 
00:27:28.691 [2024-12-10 05:04:19.674481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.674499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.674574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.674592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.674736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.674754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.674840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.674857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.674941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.674958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 
00:27:28.691 [2024-12-10 05:04:19.675096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.675115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.675264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.675282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.675366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.691 [2024-12-10 05:04:19.675383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.691 qpair failed and we were unable to recover it. 00:27:28.691 [2024-12-10 05:04:19.675471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.675488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 00:27:28.692 [2024-12-10 05:04:19.675578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.675595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 
00:27:28.692 [2024-12-10 05:04:19.675664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.675681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 00:27:28.692 [2024-12-10 05:04:19.675770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.675787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 00:27:28.692 [2024-12-10 05:04:19.675857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.675874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 00:27:28.692 [2024-12-10 05:04:19.676020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.676039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 00:27:28.692 [2024-12-10 05:04:19.676222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.676241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 
00:27:28.692 [2024-12-10 05:04:19.676337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.676356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 00:27:28.692 [2024-12-10 05:04:19.676427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.676444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 00:27:28.692 [2024-12-10 05:04:19.676598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.676616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 00:27:28.692 [2024-12-10 05:04:19.676690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.676706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 00:27:28.692 [2024-12-10 05:04:19.676778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.676794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 
00:27:28.692 [2024-12-10 05:04:19.676872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.676891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 00:27:28.692 [2024-12-10 05:04:19.677031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.677047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 00:27:28.692 [2024-12-10 05:04:19.677276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.677294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 00:27:28.692 [2024-12-10 05:04:19.677436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.677454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 00:27:28.692 [2024-12-10 05:04:19.677557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.677574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 
00:27:28.692 [2024-12-10 05:04:19.677716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.677734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 00:27:28.692 [2024-12-10 05:04:19.677804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.677820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 00:27:28.692 [2024-12-10 05:04:19.677887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.677904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 00:27:28.692 [2024-12-10 05:04:19.678000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.678016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 00:27:28.692 [2024-12-10 05:04:19.678104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.692 [2024-12-10 05:04:19.678122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.692 qpair failed and we were unable to recover it. 
00:27:28.692 [2024-12-10 05:04:19.678204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.692 [2024-12-10 05:04:19.678223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.692 qpair failed and we were unable to recover it.
00:27:28.692 [2024-12-10 05:04:19.678301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.692 [2024-12-10 05:04:19.678318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.692 qpair failed and we were unable to recover it.
00:27:28.692 [2024-12-10 05:04:19.678393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.692 [2024-12-10 05:04:19.678409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.692 qpair failed and we were unable to recover it.
00:27:28.692 [2024-12-10 05:04:19.678556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.692 [2024-12-10 05:04:19.678574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.692 qpair failed and we were unable to recover it.
00:27:28.692 [2024-12-10 05:04:19.678662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.692 [2024-12-10 05:04:19.678679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.692 qpair failed and we were unable to recover it.
00:27:28.692 [2024-12-10 05:04:19.678847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.692 [2024-12-10 05:04:19.678864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.692 qpair failed and we were unable to recover it.
00:27:28.692 [2024-12-10 05:04:19.678968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.692 [2024-12-10 05:04:19.678986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.692 qpair failed and we were unable to recover it.
00:27:28.692 [2024-12-10 05:04:19.679070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.692 [2024-12-10 05:04:19.679087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.692 qpair failed and we were unable to recover it.
00:27:28.692 [2024-12-10 05:04:19.679178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.692 [2024-12-10 05:04:19.679196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.692 qpair failed and we were unable to recover it.
00:27:28.692 [2024-12-10 05:04:19.679287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.692 [2024-12-10 05:04:19.679304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.692 qpair failed and we were unable to recover it.
00:27:28.692 [2024-12-10 05:04:19.679442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.692 [2024-12-10 05:04:19.679459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.692 qpair failed and we were unable to recover it.
00:27:28.692 [2024-12-10 05:04:19.679557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.692 [2024-12-10 05:04:19.679573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.692 qpair failed and we were unable to recover it.
00:27:28.692 [2024-12-10 05:04:19.679670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.692 [2024-12-10 05:04:19.679687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.692 qpair failed and we were unable to recover it.
00:27:28.692 [2024-12-10 05:04:19.679830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.692 [2024-12-10 05:04:19.679847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.692 qpair failed and we were unable to recover it.
00:27:28.692 [2024-12-10 05:04:19.679988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.692 [2024-12-10 05:04:19.680006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.692 qpair failed and we were unable to recover it.
00:27:28.692 [2024-12-10 05:04:19.680086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.692 [2024-12-10 05:04:19.680103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.692 qpair failed and we were unable to recover it.
00:27:28.692 [2024-12-10 05:04:19.680319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.692 [2024-12-10 05:04:19.680338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.692 qpair failed and we were unable to recover it.
00:27:28.692 [2024-12-10 05:04:19.680522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.680542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.680704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.680721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.680856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.680874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.680952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.680970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.681066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.681082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.681183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.681200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.681345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.681362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.681464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.681480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.681631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.681649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.681722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.681740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.681813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.681828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.681908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.681925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.682018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.682035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.682186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.682205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.682301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.682318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.682406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.682425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.682522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.682539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.682624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.682640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.682708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.682723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.682882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.682900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.682993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.683010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.683096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.683113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.683261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.683280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.683366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.683383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.683542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.683560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.683640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.683656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.683839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.683856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.683945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.683963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.684048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.684065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.684141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.684159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.684246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.684263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.684350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.684368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.684465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.684482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.684553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.684570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.684802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.684820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.684929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.684945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.685081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.685098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.685242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.685262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.685332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.685347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.693 [2024-12-10 05:04:19.685481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.693 [2024-12-10 05:04:19.685499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.693 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.685605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.685622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.685782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.685855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.685997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.686033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.686232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.686271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.686436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.686458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.686617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.686636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.686848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.686866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.687024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.687040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.687196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.687214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.687418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.687437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.687533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.687551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.687648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.687666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.687868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.687886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.687964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.687981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.688139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.688156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.688263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.688281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.688365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.688381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.688587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.688606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.688683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.688701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.688853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.688871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.688979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.688997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.689150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.689173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.689327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.689345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.689419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.689435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.689585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.689603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.689745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.689763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.689831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.689848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.689937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.689955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.690052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.690068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.690140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.690157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.690251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.690267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.690344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.690362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.690429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.690447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.690585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.690603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.690701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.690718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.690803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.690821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.690905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.690921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.691002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.694 [2024-12-10 05:04:19.691019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.694 qpair failed and we were unable to recover it.
00:27:28.694 [2024-12-10 05:04:19.691164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.694 [2024-12-10 05:04:19.691189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.694 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.691257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.691272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.691415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.691433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.691520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.691537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.691610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.691628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 
00:27:28.695 [2024-12-10 05:04:19.691708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.691726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.691795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.691812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.691895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.691913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.691983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.692003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.692080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.692098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 
00:27:28.695 [2024-12-10 05:04:19.692246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.692263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.692408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.692426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.692571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.692588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.692741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.692758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.692847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.692864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 
00:27:28.695 [2024-12-10 05:04:19.692948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.692966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.693052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.693069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.693153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.693204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.693292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.693311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.693462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.693479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 
00:27:28.695 [2024-12-10 05:04:19.693562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.693579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.693667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.693685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.693860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.693878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.694025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.694043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.694119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.694136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 
00:27:28.695 [2024-12-10 05:04:19.694237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.694255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.694335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.694355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.694437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.694455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.694530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.694549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.694644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.694662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 
00:27:28.695 [2024-12-10 05:04:19.694742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.694759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.694866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.694885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.695025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.695043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.695188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.695210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 00:27:28.695 [2024-12-10 05:04:19.695369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.695 [2024-12-10 05:04:19.695387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.695 qpair failed and we were unable to recover it. 
00:27:28.695 [2024-12-10 05:04:19.695526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.696 [2024-12-10 05:04:19.695542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.696 qpair failed and we were unable to recover it. 00:27:28.696 [2024-12-10 05:04:19.695754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.696 [2024-12-10 05:04:19.695771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.696 qpair failed and we were unable to recover it. 00:27:28.696 [2024-12-10 05:04:19.695865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.696 [2024-12-10 05:04:19.695881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.696 qpair failed and we were unable to recover it. 00:27:28.696 [2024-12-10 05:04:19.695983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.696 [2024-12-10 05:04:19.695999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.696 qpair failed and we were unable to recover it. 00:27:28.696 [2024-12-10 05:04:19.696135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.696 [2024-12-10 05:04:19.696153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.696 qpair failed and we were unable to recover it. 
00:27:28.696 [2024-12-10 05:04:19.696307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.696 [2024-12-10 05:04:19.696324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.696 qpair failed and we were unable to recover it. 00:27:28.696 [2024-12-10 05:04:19.696408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.696 [2024-12-10 05:04:19.696426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.696 qpair failed and we were unable to recover it. 00:27:28.696 [2024-12-10 05:04:19.696585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.696 [2024-12-10 05:04:19.696603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.696 qpair failed and we were unable to recover it. 00:27:28.696 [2024-12-10 05:04:19.696708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.696 [2024-12-10 05:04:19.696726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.696 qpair failed and we were unable to recover it. 00:27:28.696 [2024-12-10 05:04:19.696884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.696 [2024-12-10 05:04:19.696905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.696 qpair failed and we were unable to recover it. 
00:27:28.696 [2024-12-10 05:04:19.696999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.696 [2024-12-10 05:04:19.697018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.696 qpair failed and we were unable to recover it. 00:27:28.696 [2024-12-10 05:04:19.697099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.696 [2024-12-10 05:04:19.697117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.696 qpair failed and we were unable to recover it. 00:27:28.696 [2024-12-10 05:04:19.697276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.697295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 00:27:28.984 [2024-12-10 05:04:19.697388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.697406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 00:27:28.984 [2024-12-10 05:04:19.697636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.697655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 
00:27:28.984 [2024-12-10 05:04:19.697790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.697807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 00:27:28.984 [2024-12-10 05:04:19.697886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.697903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 00:27:28.984 [2024-12-10 05:04:19.697987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.698004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 00:27:28.984 [2024-12-10 05:04:19.698160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.698186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 00:27:28.984 [2024-12-10 05:04:19.698401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.698418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 
00:27:28.984 [2024-12-10 05:04:19.698568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.698585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 00:27:28.984 [2024-12-10 05:04:19.698757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.698776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 00:27:28.984 [2024-12-10 05:04:19.698845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.698860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 00:27:28.984 [2024-12-10 05:04:19.699018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.699035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 00:27:28.984 [2024-12-10 05:04:19.699209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.699227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 
00:27:28.984 [2024-12-10 05:04:19.699320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.699337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 00:27:28.984 [2024-12-10 05:04:19.699424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.699442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 00:27:28.984 [2024-12-10 05:04:19.699534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.699551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 00:27:28.984 [2024-12-10 05:04:19.699709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.699728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 00:27:28.984 [2024-12-10 05:04:19.699876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.984 [2024-12-10 05:04:19.699894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.984 qpair failed and we were unable to recover it. 
00:27:28.984 [2024-12-10 05:04:19.700039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.700056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 00:27:28.985 [2024-12-10 05:04:19.700138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.700157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 00:27:28.985 [2024-12-10 05:04:19.700312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.700330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 00:27:28.985 [2024-12-10 05:04:19.700484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.700501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 00:27:28.985 [2024-12-10 05:04:19.700648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.700667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 
00:27:28.985 [2024-12-10 05:04:19.700826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.700843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 00:27:28.985 [2024-12-10 05:04:19.700915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.700931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 00:27:28.985 [2024-12-10 05:04:19.701091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.701109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 00:27:28.985 [2024-12-10 05:04:19.701185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.701203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 00:27:28.985 [2024-12-10 05:04:19.701282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.701299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 
00:27:28.985 [2024-12-10 05:04:19.701446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.701462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 00:27:28.985 [2024-12-10 05:04:19.701546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.701564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 00:27:28.985 [2024-12-10 05:04:19.701649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.701666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 00:27:28.985 [2024-12-10 05:04:19.701736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.701754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 00:27:28.985 [2024-12-10 05:04:19.701855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.701872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 
00:27:28.985 [2024-12-10 05:04:19.701960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.701977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 00:27:28.985 [2024-12-10 05:04:19.702130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.702150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 00:27:28.985 [2024-12-10 05:04:19.702244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.702264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 00:27:28.985 [2024-12-10 05:04:19.702469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.702491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 00:27:28.985 [2024-12-10 05:04:19.702644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.702662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 
00:27:28.985 [2024-12-10 05:04:19.702752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.985 [2024-12-10 05:04:19.702773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.985 qpair failed and we were unable to recover it. 
00:27:28.988 [... the same error sequence repeated ~114 more times between 05:04:19.702944 and 05:04:19.718476: connect() to 10.0.0.2, port=4420 failing with errno = 111 (connection refused) for tqpair=0x12521a0, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:27:28.988 [2024-12-10 05:04:19.718546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.718563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 00:27:28.988 [2024-12-10 05:04:19.718653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.718671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 00:27:28.988 [2024-12-10 05:04:19.718817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.718835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 00:27:28.988 [2024-12-10 05:04:19.718971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.718987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 00:27:28.988 [2024-12-10 05:04:19.719072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.719088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 
00:27:28.988 [2024-12-10 05:04:19.719228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.719246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 00:27:28.988 [2024-12-10 05:04:19.719391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.719407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 00:27:28.988 [2024-12-10 05:04:19.719480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.719498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 00:27:28.988 [2024-12-10 05:04:19.719638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.719656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 00:27:28.988 [2024-12-10 05:04:19.719730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.719746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 
00:27:28.988 [2024-12-10 05:04:19.719822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.719840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 00:27:28.988 [2024-12-10 05:04:19.719978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.719995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 00:27:28.988 [2024-12-10 05:04:19.720077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.720095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 00:27:28.988 [2024-12-10 05:04:19.720312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.720332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 00:27:28.988 [2024-12-10 05:04:19.720477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.720495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 
00:27:28.988 [2024-12-10 05:04:19.720637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.720655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 00:27:28.988 [2024-12-10 05:04:19.720737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.720753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 00:27:28.988 [2024-12-10 05:04:19.720917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.720934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.988 qpair failed and we were unable to recover it. 00:27:28.988 [2024-12-10 05:04:19.721114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.988 [2024-12-10 05:04:19.721133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.721389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.721407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 
00:27:28.989 [2024-12-10 05:04:19.721494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.721511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.721662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.721679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.721770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.721788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.721956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.721973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.722133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.722150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 
00:27:28.989 [2024-12-10 05:04:19.722323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.722340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.722438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.722455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.722530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.722547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.722623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.722642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.722778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.722796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 
00:27:28.989 [2024-12-10 05:04:19.722950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.722967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.723054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.723071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.723163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.723206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.723371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.723389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.723482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.723502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 
00:27:28.989 [2024-12-10 05:04:19.723578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.723595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.723735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.723753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.723829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.723848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.723985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.724002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.724087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.724105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 
00:27:28.989 [2024-12-10 05:04:19.724189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.724207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.724413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.724430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.724595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.724613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.724699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.724716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.724808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.724826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 
00:27:28.989 [2024-12-10 05:04:19.724967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.724985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.725123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.725141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.725237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.725256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.725331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.725348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.725423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.725439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 
00:27:28.989 [2024-12-10 05:04:19.725520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.725537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.725608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.725626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.725710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.725728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.725865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.725884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.725990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.726008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 
00:27:28.989 [2024-12-10 05:04:19.726098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.989 [2024-12-10 05:04:19.726116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.989 qpair failed and we were unable to recover it. 00:27:28.989 [2024-12-10 05:04:19.726198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.726215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.726299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.726317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.726474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.726492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.726576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.726593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 
00:27:28.990 [2024-12-10 05:04:19.726722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.726740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.726916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.726937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.727023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.727041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.727112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.727130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.727210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.727229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 
00:27:28.990 [2024-12-10 05:04:19.727365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.727384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.727467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.727485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.727673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.727690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.727870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.727887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.727976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.727995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 
00:27:28.990 [2024-12-10 05:04:19.728065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.728082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.728161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.728188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.728325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.728343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.728491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.728508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.728670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.728687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 
00:27:28.990 [2024-12-10 05:04:19.728858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.728876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.729131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.729149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.729314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.729333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.729480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.729497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 00:27:28.990 [2024-12-10 05:04:19.729659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.990 [2024-12-10 05:04:19.729676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.990 qpair failed and we were unable to recover it. 
00:27:28.993 [2024-12-10 05:04:19.745160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.745184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 00:27:28.993 [2024-12-10 05:04:19.745342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.745359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 00:27:28.993 [2024-12-10 05:04:19.745505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.745522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 00:27:28.993 [2024-12-10 05:04:19.745619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.745635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 00:27:28.993 [2024-12-10 05:04:19.745802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.745821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 
00:27:28.993 [2024-12-10 05:04:19.745956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.745973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 00:27:28.993 [2024-12-10 05:04:19.746112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.746131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 00:27:28.993 [2024-12-10 05:04:19.746231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.746248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 00:27:28.993 [2024-12-10 05:04:19.746403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.746422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 00:27:28.993 [2024-12-10 05:04:19.746565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.746582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 
00:27:28.993 [2024-12-10 05:04:19.746724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.746741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 00:27:28.993 [2024-12-10 05:04:19.746855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.746873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 00:27:28.993 [2024-12-10 05:04:19.746946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.746963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 00:27:28.993 [2024-12-10 05:04:19.747103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.747121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 00:27:28.993 [2024-12-10 05:04:19.747216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.747234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 
00:27:28.993 [2024-12-10 05:04:19.747390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.747408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 00:27:28.993 [2024-12-10 05:04:19.747493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.747511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 00:27:28.993 [2024-12-10 05:04:19.747669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.747687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 00:27:28.993 [2024-12-10 05:04:19.747772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-12-10 05:04:19.747789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.993 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.747869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.747887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 
00:27:28.994 [2024-12-10 05:04:19.747972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.747990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.748143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.748161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.748338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.748356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.748456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.748474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.748615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.748632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 
00:27:28.994 [2024-12-10 05:04:19.748717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.748734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.748875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.748893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.748996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.749013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.749154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.749178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.749281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.749298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 
00:27:28.994 [2024-12-10 05:04:19.749387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.749404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.749557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.749574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.749656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.749675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.749763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.749780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.749868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.749886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 
00:27:28.994 [2024-12-10 05:04:19.750044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.750062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.750148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.750174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.750397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.750415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.750570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.750588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.750684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.750701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 
00:27:28.994 [2024-12-10 05:04:19.750780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.750797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.750896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.750914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.751058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.751075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.751181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.751200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.751356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.751373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 
00:27:28.994 [2024-12-10 05:04:19.751531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.751550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.751629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.751648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.751748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.751765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.751872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.751890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.751984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.752002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 
00:27:28.994 [2024-12-10 05:04:19.752076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.752093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.752196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.752214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.752397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.752415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.752553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.752570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.752647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.752664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 
00:27:28.994 [2024-12-10 05:04:19.752757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.752775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.752865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.752882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.752961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.994 [2024-12-10 05:04:19.752978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.994 qpair failed and we were unable to recover it. 00:27:28.994 [2024-12-10 05:04:19.753066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.995 [2024-12-10 05:04:19.753083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.995 qpair failed and we were unable to recover it. 00:27:28.995 [2024-12-10 05:04:19.753179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.995 [2024-12-10 05:04:19.753198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.995 qpair failed and we were unable to recover it. 
00:27:28.995 [2024-12-10 05:04:19.753266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.995 [2024-12-10 05:04:19.753283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.995 qpair failed and we were unable to recover it. 00:27:28.995 [2024-12-10 05:04:19.753397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.995 [2024-12-10 05:04:19.753418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.995 qpair failed and we were unable to recover it. 00:27:28.995 [2024-12-10 05:04:19.753560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.995 [2024-12-10 05:04:19.753578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.995 qpair failed and we were unable to recover it. 00:27:28.995 [2024-12-10 05:04:19.753721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.995 [2024-12-10 05:04:19.753740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.995 qpair failed and we were unable to recover it. 00:27:28.995 [2024-12-10 05:04:19.753916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.995 [2024-12-10 05:04:19.753934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.995 qpair failed and we were unable to recover it. 
00:27:28.995 [2024-12-10 05:04:19.754026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.995 [2024-12-10 05:04:19.754043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.995 qpair failed and we were unable to recover it. 00:27:28.995 [2024-12-10 05:04:19.754216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.995 [2024-12-10 05:04:19.754235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.995 qpair failed and we were unable to recover it. 00:27:28.995 [2024-12-10 05:04:19.754420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.995 [2024-12-10 05:04:19.754438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.995 qpair failed and we were unable to recover it. 00:27:28.995 [2024-12-10 05:04:19.754577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.995 [2024-12-10 05:04:19.754594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.995 qpair failed and we were unable to recover it. 00:27:28.995 [2024-12-10 05:04:19.754732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.995 [2024-12-10 05:04:19.754750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.995 qpair failed and we were unable to recover it. 
00:27:28.995 [2024-12-10 05:04:19.754851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.995 [2024-12-10 05:04:19.754869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.995 qpair failed and we were unable to recover it. 00:27:28.995 [2024-12-10 05:04:19.754959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.995 [2024-12-10 05:04:19.754978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.995 qpair failed and we were unable to recover it. 00:27:28.995 [2024-12-10 05:04:19.755056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.995 [2024-12-10 05:04:19.755072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.995 qpair failed and we were unable to recover it. 00:27:28.995 [2024-12-10 05:04:19.755152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.995 [2024-12-10 05:04:19.755177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.995 qpair failed and we were unable to recover it. 00:27:28.995 [2024-12-10 05:04:19.755319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.995 [2024-12-10 05:04:19.755338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.995 qpair failed and we were unable to recover it. 
00:27:28.995 [2024-12-10 05:04:19.755425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.995 [2024-12-10 05:04:19.755443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:28.995 qpair failed and we were unable to recover it.
[the same three-line record — connect() failed, errno = 111 / sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it — repeats with advancing timestamps from 05:04:19.755585 through 05:04:19.770818]
00:27:28.998 [2024-12-10 05:04:19.770958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.770976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.998 [2024-12-10 05:04:19.771122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.771139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.998 [2024-12-10 05:04:19.771245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.771263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.998 [2024-12-10 05:04:19.771355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.771372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.998 [2024-12-10 05:04:19.771528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.771546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 
00:27:28.998 [2024-12-10 05:04:19.771696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.771714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.998 [2024-12-10 05:04:19.771812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.771830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.998 [2024-12-10 05:04:19.771927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.771945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.998 [2024-12-10 05:04:19.772084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.772101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.998 [2024-12-10 05:04:19.772191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.772209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 
00:27:28.998 [2024-12-10 05:04:19.772353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.772371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.998 [2024-12-10 05:04:19.772510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.772527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.998 [2024-12-10 05:04:19.772609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.772627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.998 [2024-12-10 05:04:19.772701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.772719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.998 [2024-12-10 05:04:19.772795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.772813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 
00:27:28.998 [2024-12-10 05:04:19.772884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.772902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.998 [2024-12-10 05:04:19.773050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.773067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.998 [2024-12-10 05:04:19.773207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.773225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.998 [2024-12-10 05:04:19.773361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.773380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.998 [2024-12-10 05:04:19.773463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.773480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 
00:27:28.998 [2024-12-10 05:04:19.773553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.773573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.998 [2024-12-10 05:04:19.773645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.998 [2024-12-10 05:04:19.773664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.998 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.773823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.773841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.773986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.774003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.774195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.774223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 
00:27:28.999 [2024-12-10 05:04:19.774318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.774336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.774473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.774491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.774580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.774598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.774685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.774702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.774870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.774887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 
00:27:28.999 [2024-12-10 05:04:19.774995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.775013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.775094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.775111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.775252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.775271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.775428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.775446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.775531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.775550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 
00:27:28.999 [2024-12-10 05:04:19.775709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.775726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.775822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.775839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.775914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.775931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.776022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.776039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.776181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.776199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 
00:27:28.999 [2024-12-10 05:04:19.776359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.776377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.776541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.776559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.776640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.776658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.776739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.776756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.776824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.776842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 
00:27:28.999 [2024-12-10 05:04:19.777052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.777070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.777225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.777244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.777327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.777345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.777420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.777438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.777517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.777534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 
00:27:28.999 [2024-12-10 05:04:19.777701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.777718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.777859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.777876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.777962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.777979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.778145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.778162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.778254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.778272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 
00:27:28.999 [2024-12-10 05:04:19.778358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.778376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.778457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.778474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.778573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.778591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.778739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.778758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.778851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.778870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 
00:27:28.999 [2024-12-10 05:04:19.779016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.999 [2024-12-10 05:04:19.779033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:28.999 qpair failed and we were unable to recover it. 00:27:28.999 [2024-12-10 05:04:19.779197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.779217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.779371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.779389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.779465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.779483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.779644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.779663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 
00:27:29.000 [2024-12-10 05:04:19.779874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.779892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.779962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.779978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.780047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.780063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.780201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.780219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.780356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.780373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 
00:27:29.000 [2024-12-10 05:04:19.780466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.780484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.780562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.780579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.780720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.780737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.780890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.780907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.781004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.781022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 
00:27:29.000 [2024-12-10 05:04:19.781118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.781135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.781242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.781260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.781337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.781355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.781431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.781448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.781518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.781534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 
00:27:29.000 [2024-12-10 05:04:19.781617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.781636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.781703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.781718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.781782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.781799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.781867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.781885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.782037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.782054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 
00:27:29.000 [2024-12-10 05:04:19.782190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.782209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.782347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.782364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.782449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.782468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.782641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.782662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.782742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.782761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 
00:27:29.000 [2024-12-10 05:04:19.782832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.782849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.782919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.782936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.783020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.783038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.783121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.783139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.783218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.783235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 
00:27:29.000 [2024-12-10 05:04:19.783329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.783347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.783417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.783433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.783631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.783649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.783878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.783894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.000 [2024-12-10 05:04:19.784029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.784047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 
00:27:29.000 [2024-12-10 05:04:19.784210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.000 [2024-12-10 05:04:19.784228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.000 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.784308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.784326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.784410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.784428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.784516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.784534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.784612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.784629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 
00:27:29.001 [2024-12-10 05:04:19.784723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.784740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.784807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.784825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.784978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.784995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.785086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.785104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.785249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.785267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 
00:27:29.001 [2024-12-10 05:04:19.785424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.785442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.785603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.785621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.785692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.785710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.785800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.785816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.785896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.785913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 
00:27:29.001 [2024-12-10 05:04:19.786063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.786147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.786325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.786365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.786563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.786597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.786689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.786708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.786866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.786883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 
00:27:29.001 [2024-12-10 05:04:19.786956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.786974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.787063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.787080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.787286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.787304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.787510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.787529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.787669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.787687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 
00:27:29.001 [2024-12-10 05:04:19.787757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.787774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.787918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.787992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.788205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.788243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.788452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.788472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.788689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.788708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 
00:27:29.001 [2024-12-10 05:04:19.788853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.788871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.789010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.789028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.789103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.789120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.789293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.789312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.789450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.789468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 
00:27:29.001 [2024-12-10 05:04:19.789562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.789580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.789659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.789676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.789771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.789788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.789857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.789875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 00:27:29.001 [2024-12-10 05:04:19.789977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.789996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.001 qpair failed and we were unable to recover it. 
00:27:29.001 [2024-12-10 05:04:19.790164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.001 [2024-12-10 05:04:19.790188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.790265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.790283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.790361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.790381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.790529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.790547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.790689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.790707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 
00:27:29.002 [2024-12-10 05:04:19.790788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.790806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.790951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.790968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.791046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.791064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.791155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.791200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.791274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.791293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 
00:27:29.002 [2024-12-10 05:04:19.791366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.791384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.791465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.791483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.791580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.791597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.791669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.791686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.791922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.791939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 
00:27:29.002 [2024-12-10 05:04:19.792038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.792056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.792264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.792283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.792421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.792439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.792521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.792538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.792686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.792703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 
00:27:29.002 [2024-12-10 05:04:19.792812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.792830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.792911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.792928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.793070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.793087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.793178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.793196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.793288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.793306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 
00:27:29.002 [2024-12-10 05:04:19.793449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.793467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.793558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.793575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.793663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.793680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.793833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.793850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.793931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.793948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 
00:27:29.002 [2024-12-10 05:04:19.794090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.794108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.794286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.794303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.794374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.794389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.794528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.794546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 00:27:29.002 [2024-12-10 05:04:19.794688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.002 [2024-12-10 05:04:19.794704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.002 qpair failed and we were unable to recover it. 
00:27:29.004 [2024-12-10 05:04:19.805418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.004 [2024-12-10 05:04:19.805492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.004 qpair failed and we were unable to recover it.
00:27:29.005 [2024-12-10 05:04:19.810159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.005 [2024-12-10 05:04:19.810182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.005 qpair failed and we were unable to recover it. 00:27:29.005 [2024-12-10 05:04:19.810323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.005 [2024-12-10 05:04:19.810341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.005 qpair failed and we were unable to recover it. 00:27:29.005 [2024-12-10 05:04:19.810485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.005 [2024-12-10 05:04:19.810502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.005 qpair failed and we were unable to recover it. 00:27:29.005 [2024-12-10 05:04:19.810644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.005 [2024-12-10 05:04:19.810663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.005 qpair failed and we were unable to recover it. 00:27:29.005 [2024-12-10 05:04:19.810818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.005 [2024-12-10 05:04:19.810836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.005 qpair failed and we were unable to recover it. 
00:27:29.005 [2024-12-10 05:04:19.810996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.005 [2024-12-10 05:04:19.811014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.005 qpair failed and we were unable to recover it. 00:27:29.005 [2024-12-10 05:04:19.811100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.005 [2024-12-10 05:04:19.811118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.005 qpair failed and we were unable to recover it. 00:27:29.005 [2024-12-10 05:04:19.811323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.005 [2024-12-10 05:04:19.811340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.005 qpair failed and we were unable to recover it. 00:27:29.005 [2024-12-10 05:04:19.811475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.005 [2024-12-10 05:04:19.811495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.005 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.811573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.811590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 
00:27:29.006 [2024-12-10 05:04:19.811674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.811692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.811768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.811786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.811880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.811898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.812103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.812120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.812202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.812219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 
00:27:29.006 [2024-12-10 05:04:19.812304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.812321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.812488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.812505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.812592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.812609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.812762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.812781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.812864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.812881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 
00:27:29.006 [2024-12-10 05:04:19.812967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.812989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.813081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.813099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.813305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.813323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.813516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.813533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.813782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.813799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 
00:27:29.006 [2024-12-10 05:04:19.813888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.813905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.814047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.814065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.814173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.814191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.814281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.814298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.814377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.814395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 
00:27:29.006 [2024-12-10 05:04:19.814571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.814588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.814676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.814694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.814780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.814797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.814873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.814891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.815028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.815045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 
00:27:29.006 [2024-12-10 05:04:19.815117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.815135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.815233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.815251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.815393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.815410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.815488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.815505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.815574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.815593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 
00:27:29.006 [2024-12-10 05:04:19.815738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.815756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.815843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.815861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.815938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.815955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.006 qpair failed and we were unable to recover it. 00:27:29.006 [2024-12-10 05:04:19.816026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.006 [2024-12-10 05:04:19.816042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.816189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.816207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 
00:27:29.007 [2024-12-10 05:04:19.816282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.816300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.816371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.816389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.816487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.816509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.816587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.816604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.816690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.816707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 
00:27:29.007 [2024-12-10 05:04:19.816809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.816826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.816974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.816993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.817079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.817096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.817186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.817205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.817430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.817448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 
00:27:29.007 [2024-12-10 05:04:19.817599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.817617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.817756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.817774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.817864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.817882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.818042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.818060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.818297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.818318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 
00:27:29.007 [2024-12-10 05:04:19.818532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.818550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.818764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.818783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.818955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.818972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.819049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.819066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.819306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.819325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 
00:27:29.007 [2024-12-10 05:04:19.819412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.819428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.819609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.819626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.819776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.819793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.819880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.819900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.819998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.820018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 
00:27:29.007 [2024-12-10 05:04:19.820156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.820188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.820264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.820284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.820367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.820385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.820527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.820545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.820640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.820662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 
00:27:29.007 [2024-12-10 05:04:19.820751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.820768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.820908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.820924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.821030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.821048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.821195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.821213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 00:27:29.007 [2024-12-10 05:04:19.821356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.007 [2024-12-10 05:04:19.821373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.007 qpair failed and we were unable to recover it. 
00:27:29.010 [2024-12-10 05:04:19.835482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.835499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 00:27:29.010 [2024-12-10 05:04:19.835568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.835584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 00:27:29.010 [2024-12-10 05:04:19.835787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.835859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 00:27:29.010 [2024-12-10 05:04:19.836089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.836126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 00:27:29.010 [2024-12-10 05:04:19.836247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.836268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 
00:27:29.010 [2024-12-10 05:04:19.836485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.836502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 00:27:29.010 [2024-12-10 05:04:19.836599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.836617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 00:27:29.010 [2024-12-10 05:04:19.836799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.836816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 00:27:29.010 [2024-12-10 05:04:19.836953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.836970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 00:27:29.010 [2024-12-10 05:04:19.837109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.837126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 
00:27:29.010 [2024-12-10 05:04:19.837330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.837348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 00:27:29.010 [2024-12-10 05:04:19.837419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.837437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 00:27:29.010 [2024-12-10 05:04:19.837510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.837527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 00:27:29.010 [2024-12-10 05:04:19.837617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.837635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 00:27:29.010 [2024-12-10 05:04:19.837772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.837790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 
00:27:29.010 [2024-12-10 05:04:19.837935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.837952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 00:27:29.010 [2024-12-10 05:04:19.838094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.838116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 00:27:29.010 [2024-12-10 05:04:19.838273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.838291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 00:27:29.010 [2024-12-10 05:04:19.838400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.838417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 00:27:29.010 [2024-12-10 05:04:19.838502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.838520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 
00:27:29.010 [2024-12-10 05:04:19.838672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.838689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.010 qpair failed and we were unable to recover it. 00:27:29.010 [2024-12-10 05:04:19.838766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.010 [2024-12-10 05:04:19.838784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.839010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.839027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.839097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.839114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.839196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.839215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 
00:27:29.011 [2024-12-10 05:04:19.839295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.839313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.839412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.839430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.839517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.839534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.839642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.839659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.839735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.839752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 
00:27:29.011 [2024-12-10 05:04:19.839920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.839938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.840023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.840040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.840179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.840197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.840289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.840306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.840400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.840418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 
00:27:29.011 [2024-12-10 05:04:19.840556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.840574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.840715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.840732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.840817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.840835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.840914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.840932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.841030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.841048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 
00:27:29.011 [2024-12-10 05:04:19.841137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.841154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.841258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.841275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.841346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.841363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.841502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.841520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.841595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.841612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 
00:27:29.011 [2024-12-10 05:04:19.841776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.841794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.841928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.841947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.842027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.842044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.842130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.842148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.842310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.842328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 
00:27:29.011 [2024-12-10 05:04:19.842421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.842438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.842609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.842626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.842699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.842716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.842873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.842890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 00:27:29.011 [2024-12-10 05:04:19.842969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.842986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.011 qpair failed and we were unable to recover it. 
00:27:29.011 [2024-12-10 05:04:19.843123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.011 [2024-12-10 05:04:19.843140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.843233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.843251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.843343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.843360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.843434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.843451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.843525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.843543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 
00:27:29.012 [2024-12-10 05:04:19.843625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.843643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.843731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.843748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.843902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.843921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.844061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.844078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.844215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.844233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 
00:27:29.012 [2024-12-10 05:04:19.844338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.844355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.844447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.844464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.844601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.844619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.844828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.844846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.844947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.844965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 
00:27:29.012 [2024-12-10 05:04:19.845049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.845067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.845220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.845239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.845398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.845416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.845505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.845534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.845680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.845699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 
00:27:29.012 [2024-12-10 05:04:19.845847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.845864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.845947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.845966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.846108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.846125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.846205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.846223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.846318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.846335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 
00:27:29.012 [2024-12-10 05:04:19.846406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.846423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.846514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.846531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.846685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.846704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.846786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.846802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.846951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.846971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 
00:27:29.012 [2024-12-10 05:04:19.847041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.847058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.847130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.847148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.847238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.847256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.847459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.847477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.847570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.847589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 
00:27:29.012 [2024-12-10 05:04:19.847666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.847683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.847760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.847777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.847916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.847933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.848019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.848037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 00:27:29.012 [2024-12-10 05:04:19.848116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.012 [2024-12-10 05:04:19.848133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.012 qpair failed and we were unable to recover it. 
00:27:29.012 [2024-12-10 05:04:19.848300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.848319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.848398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.848415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.848489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.848506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.848589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.848606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.848688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.848707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 
00:27:29.013 [2024-12-10 05:04:19.848781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.848798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.849027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.849046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.849209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.849228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.849316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.849334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.849491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.849508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 
00:27:29.013 [2024-12-10 05:04:19.849579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.849598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.849706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.849724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.849824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.849842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.849982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.850000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.850073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.850092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 
00:27:29.013 [2024-12-10 05:04:19.850193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.850212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.850291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.850313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.850467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.850485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.850579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.850597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.850676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.850694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 
00:27:29.013 [2024-12-10 05:04:19.850839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.850856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.851030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.851048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.851123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.851140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.851324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.851342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.851444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.851462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 
00:27:29.013 [2024-12-10 05:04:19.851546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.851563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.851642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.851659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.851813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.851830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.851964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.851981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.852064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.852081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 
00:27:29.013 [2024-12-10 05:04:19.852231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.852250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.852323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.852341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.852478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.852496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.852636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.852653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.852791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.852809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 
00:27:29.013 [2024-12-10 05:04:19.852950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.852968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.853127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.853145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.853258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.853277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.853509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.853526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 00:27:29.013 [2024-12-10 05:04:19.853616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.013 [2024-12-10 05:04:19.853633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.013 qpair failed and we were unable to recover it. 
00:27:29.014 [2024-12-10 05:04:19.853827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.853846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.854015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.854034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.854250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.854268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.854471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.854489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.854648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.854665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 
00:27:29.014 [2024-12-10 05:04:19.854749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.854766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.854852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.854870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.855040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.855057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.855214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.855233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.855307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.855324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 
00:27:29.014 [2024-12-10 05:04:19.855413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.855432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.855608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.855626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.855839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.855857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.855940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.855958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.856045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.856063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 
00:27:29.014 [2024-12-10 05:04:19.856205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.856223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.856320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.856338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.856579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.856597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.856668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.856684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.856910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.856928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 
00:27:29.014 [2024-12-10 05:04:19.857021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.857038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.857110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.857127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.857280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.857297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.857506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.857523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.857600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.857618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 
00:27:29.014 [2024-12-10 05:04:19.857811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.857829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.857968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.857986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.858061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.858079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.858236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.858254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.858341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.858358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 
00:27:29.014 [2024-12-10 05:04:19.858513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.858531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.858619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.858636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.858812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.858831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.859058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.859076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.859179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.859198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 
00:27:29.014 [2024-12-10 05:04:19.859293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.859311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.859498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.859517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.859665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.859682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.859891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.014 [2024-12-10 05:04:19.859909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.014 qpair failed and we were unable to recover it. 00:27:29.014 [2024-12-10 05:04:19.860064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.015 [2024-12-10 05:04:19.860081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.015 qpair failed and we were unable to recover it. 
00:27:29.015 [2024-12-10 05:04:19.860172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.015 [2024-12-10 05:04:19.860190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.015 qpair failed and we were unable to recover it. 00:27:29.015 [2024-12-10 05:04:19.860264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.015 [2024-12-10 05:04:19.860282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.015 qpair failed and we were unable to recover it. 00:27:29.015 [2024-12-10 05:04:19.860367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.015 [2024-12-10 05:04:19.860385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.015 qpair failed and we were unable to recover it. 00:27:29.015 [2024-12-10 05:04:19.860525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.015 [2024-12-10 05:04:19.860543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.015 qpair failed and we were unable to recover it. 00:27:29.015 [2024-12-10 05:04:19.860616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.015 [2024-12-10 05:04:19.860640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.015 qpair failed and we were unable to recover it. 
00:27:29.015 [2024-12-10 05:04:19.860725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.015 [2024-12-10 05:04:19.860742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.015 qpair failed and we were unable to recover it.
00:27:29.015 [2024-12-10 05:04:19.864362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.015 [2024-12-10 05:04:19.864444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.015 qpair failed and we were unable to recover it.
00:27:29.015 [2024-12-10 05:04:19.864709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.015 [2024-12-10 05:04:19.864781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420
00:27:29.015 qpair failed and we were unable to recover it.
00:27:29.018 [2024-12-10 05:04:19.877212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.877231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.877372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.877445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.877589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.877633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.877823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.877859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.878045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.878064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 
00:27:29.018 [2024-12-10 05:04:19.878178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.878197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.878428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.878450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.878614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.878631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.878786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.878804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.878950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.878967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 
00:27:29.018 [2024-12-10 05:04:19.879050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.879068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.879157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.879183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.879273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.879291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.879498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.879516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.879662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.879680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 
00:27:29.018 [2024-12-10 05:04:19.879824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.879840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.879983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.880001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.880084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.880101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.880260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.880279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.880428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.880445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 
00:27:29.018 [2024-12-10 05:04:19.880531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.880550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.880622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.880637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.880774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.880791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.880950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.880967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.881114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.881132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 
00:27:29.018 [2024-12-10 05:04:19.881212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.881230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.881390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.881407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.881486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.881504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.881639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.881656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.881795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.881814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 
00:27:29.018 [2024-12-10 05:04:19.881905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.881922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.882069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.882087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.018 [2024-12-10 05:04:19.882228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.018 [2024-12-10 05:04:19.882246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.018 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.882334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.882354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.882513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.882531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 
00:27:29.019 [2024-12-10 05:04:19.882722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.882739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.882898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.882915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.883067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.883084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.883175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.883194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.883276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.883292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 
00:27:29.019 [2024-12-10 05:04:19.883445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.883463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.883551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.883569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.883648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.883665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.883811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.883828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.884032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.884050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 
00:27:29.019 [2024-12-10 05:04:19.884282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.884300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.884400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.884417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.884497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.884515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.884689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.884706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.884783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.884801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 
00:27:29.019 [2024-12-10 05:04:19.884895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.884913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.885062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.885080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.885157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.885190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.885285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.885303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.885389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.885406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 
00:27:29.019 [2024-12-10 05:04:19.885549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.885566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.885664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.885682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.885762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.885780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.885854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.885869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.886026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.886044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 
00:27:29.019 [2024-12-10 05:04:19.886253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.886274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.886414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.886431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.886518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.886536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.886667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.886685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.886821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.886838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 
00:27:29.019 [2024-12-10 05:04:19.886993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.887010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.887223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.887242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.887428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.887446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.887517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.887532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.887703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.887721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 
00:27:29.019 [2024-12-10 05:04:19.887876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.887894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.887969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.887985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.019 qpair failed and we were unable to recover it. 00:27:29.019 [2024-12-10 05:04:19.888137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.019 [2024-12-10 05:04:19.888156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.020 qpair failed and we were unable to recover it. 00:27:29.020 [2024-12-10 05:04:19.888394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.020 [2024-12-10 05:04:19.888411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.020 qpair failed and we were unable to recover it. 00:27:29.020 [2024-12-10 05:04:19.888499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.020 [2024-12-10 05:04:19.888517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.020 qpair failed and we were unable to recover it. 
00:27:29.020 [2024-12-10 05:04:19.888668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.020 [2024-12-10 05:04:19.888685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.020 qpair failed and we were unable to recover it.
00:27:29.022 [... the same three-line error sequence repeats continuously through 2024-12-10 05:04:19.905363: every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED) and the qpair cannot be recovered ...]
00:27:29.023 [2024-12-10 05:04:19.905440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.905458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.905610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.905628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.905706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.905723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.905828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.905845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.905931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.905950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 
00:27:29.023 [2024-12-10 05:04:19.906101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.906118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.906198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.906215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.906303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.906322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.906549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.906566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.906636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.906653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 
00:27:29.023 [2024-12-10 05:04:19.906723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.906738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.906899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.906916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.907014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.907031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.907193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.907212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.907301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.907318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 
00:27:29.023 [2024-12-10 05:04:19.907508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.907525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.907788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.907807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.907894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.907911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.908050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.908068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.908205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.908223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 
00:27:29.023 [2024-12-10 05:04:19.908364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.908382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.908453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.908469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.908613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.908631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.908729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.908746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.908848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.908865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 
00:27:29.023 [2024-12-10 05:04:19.909019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.909036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.909215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.909233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.909324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.909341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.909432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.909450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.909672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.909691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 
00:27:29.023 [2024-12-10 05:04:19.909848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.909866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.909964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.909981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.910069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.910086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.910179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.910200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.910294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.910310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 
00:27:29.023 [2024-12-10 05:04:19.910394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.023 [2024-12-10 05:04:19.910411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.023 qpair failed and we were unable to recover it. 00:27:29.023 [2024-12-10 05:04:19.910499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.910516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.910658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.910676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.910768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.910786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.910994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.911012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 
00:27:29.024 [2024-12-10 05:04:19.911108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.911126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.911213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.911231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.911340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.911357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.911523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.911540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.911707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.911726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 
00:27:29.024 [2024-12-10 05:04:19.911869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.911886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.911956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.911972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.912059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.912076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.912179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.912197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.912298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.912316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 
00:27:29.024 [2024-12-10 05:04:19.912411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.912428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.912610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.912627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.912710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.912728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.912938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.912955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.913043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.913061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 
00:27:29.024 [2024-12-10 05:04:19.913198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.913218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.913359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.913376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.913448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.913465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.913607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.913626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.913773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.913790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 
00:27:29.024 [2024-12-10 05:04:19.913945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.913965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.914108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.914126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.914235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.914253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.914458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.914478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 00:27:29.024 [2024-12-10 05:04:19.914561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.024 [2024-12-10 05:04:19.914578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.024 qpair failed and we were unable to recover it. 
00:27:29.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 783999 Killed "${NVMF_APP[@]}" "$@"
00:27:29.025 05:04:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:29.025 05:04:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:29.025 05:04:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:29.025 05:04:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:29.025 05:04:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:29.025 [2024-12-10 05:04:19.917489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.917507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.917603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.917622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.917722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.917740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.917817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.917834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.917918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.917936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 
00:27:29.025 [2024-12-10 05:04:19.918134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.918152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.918251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.918269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.918408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.918426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.918587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.918606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.918683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.918701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 
00:27:29.025 [2024-12-10 05:04:19.918886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.918903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.918992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.919009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.919097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.919116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.919260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.919280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.919527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.919544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 
00:27:29.025 [2024-12-10 05:04:19.919630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.919648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.919734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.919751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.919847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.919865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.919950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.919967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.920127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.920144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 
00:27:29.025 [2024-12-10 05:04:19.920383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.920400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.920606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.025 [2024-12-10 05:04:19.920624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.025 qpair failed and we were unable to recover it. 00:27:29.025 [2024-12-10 05:04:19.920707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.920726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.920897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.920914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.921008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.921026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 
00:27:29.026 [2024-12-10 05:04:19.921235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.921253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.921336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.921353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.921455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.921472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.921623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.921639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.921789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.921805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 
00:27:29.026 [2024-12-10 05:04:19.921986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.922004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.922154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.922177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.922312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.922329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.922474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.922491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.922585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.922602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 
00:27:29.026 [2024-12-10 05:04:19.922788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.922806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.922888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.922905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.922994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.923012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.923088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.923106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.923180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.923197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 
00:27:29.026 [2024-12-10 05:04:19.923338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.923359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.923436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.923453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.923606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.923623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.923818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.923836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 05:04:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=784910 00:27:29.026 [2024-12-10 05:04:19.923992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.924011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 
00:27:29.026 [2024-12-10 05:04:19.924149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.924175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 05:04:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 784910 00:27:29.026 [2024-12-10 05:04:19.924318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.924337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.924490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.924508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.924587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.924605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 
00:27:29.026 [2024-12-10 05:04:19.924687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 05:04:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 784910 ']' 00:27:29.026 [2024-12-10 05:04:19.924704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 05:04:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.924913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.924987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.026 05:04:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.925152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.925216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 
00:27:29.026 05:04:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:29.026 [2024-12-10 05:04:19.925398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.925433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 05:04:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.026 [2024-12-10 05:04:19.925635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.925655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.925833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.925850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 05:04:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:29.026 [2024-12-10 05:04:19.926013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.926031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 
00:27:29.026 05:04:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:29.026 [2024-12-10 05:04:19.926197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.026 [2024-12-10 05:04:19.926215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.026 qpair failed and we were unable to recover it. 00:27:29.026 [2024-12-10 05:04:19.926423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.926440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.926531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.926548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.926705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.926723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.926865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.926881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 
00:27:29.027 [2024-12-10 05:04:19.926978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.926996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.927135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.927153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.927320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.927339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.927423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.927440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.927515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.927534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 
00:27:29.027 [2024-12-10 05:04:19.927675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.927693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.927788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.927806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.927954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.927972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.928041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.928058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.928129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.928148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 
00:27:29.027 [2024-12-10 05:04:19.928267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.928285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.928364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.928382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.928532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.928550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.928637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.928656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.928814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.928831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 
00:27:29.027 [2024-12-10 05:04:19.928921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.928941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.929103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.929120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.929216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.929234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.929380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.929397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 00:27:29.027 [2024-12-10 05:04:19.929543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.929560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 
00:27:29.027 [2024-12-10 05:04:19.929775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.027 [2024-12-10 05:04:19.929794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.027 qpair failed and we were unable to recover it. 
[... identical connection-failure sequence (connect() errno = 111, tqpair=0x12521a0, addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats through 2024-12-10 05:04:19.945594 ...]
00:27:29.030 [2024-12-10 05:04:19.945685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.030 [2024-12-10 05:04:19.945702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.030 qpair failed and we were unable to recover it. 00:27:29.030 [2024-12-10 05:04:19.945800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.030 [2024-12-10 05:04:19.945819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.030 qpair failed and we were unable to recover it. 00:27:29.030 [2024-12-10 05:04:19.946051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.030 [2024-12-10 05:04:19.946073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.030 qpair failed and we were unable to recover it. 00:27:29.030 [2024-12-10 05:04:19.946179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.030 [2024-12-10 05:04:19.946197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.030 qpair failed and we were unable to recover it. 00:27:29.030 [2024-12-10 05:04:19.946281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.030 [2024-12-10 05:04:19.946300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.030 qpair failed and we were unable to recover it. 
00:27:29.030 [2024-12-10 05:04:19.946373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.030 [2024-12-10 05:04:19.946391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.030 qpair failed and we were unable to recover it. 00:27:29.030 [2024-12-10 05:04:19.946487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.946505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.946588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.946606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.946743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.946762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.947003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.947021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 
00:27:29.031 [2024-12-10 05:04:19.947109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.947127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.947198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.947221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.947370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.947388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.947468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.947486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.947557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.947574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 
00:27:29.031 [2024-12-10 05:04:19.947712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.947731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.947819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.947836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.948090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.948108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.948188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.948207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.948297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.948314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 
00:27:29.031 [2024-12-10 05:04:19.948400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.948419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.948562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.948580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.948655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.948672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.948762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.948780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.948949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.948966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 
00:27:29.031 [2024-12-10 05:04:19.949117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.949136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.949352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.949371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.949459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.949477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.949644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.949661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.949736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.949756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 
00:27:29.031 [2024-12-10 05:04:19.949902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.949918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.950023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.950039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.950294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.950312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.950468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.950486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.950554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.950570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 
00:27:29.031 [2024-12-10 05:04:19.950641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.950658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.950802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.950819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.950889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.950905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.950990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.951008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.951094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.951112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 
00:27:29.031 [2024-12-10 05:04:19.951204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.951222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.951318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.951335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.951511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.951529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.951621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.951640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.951721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.951738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 
00:27:29.031 [2024-12-10 05:04:19.951880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.951898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.952043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.031 [2024-12-10 05:04:19.952062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.031 qpair failed and we were unable to recover it. 00:27:29.031 [2024-12-10 05:04:19.952212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.952231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.952318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.952336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.952476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.952494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 
00:27:29.032 [2024-12-10 05:04:19.952590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.952608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.952756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.952773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.952912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.952930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.953067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.953085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.953160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.953207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 
00:27:29.032 [2024-12-10 05:04:19.953289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.953306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.953379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.953398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.953548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.953566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.953806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.953823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.953964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.953980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 
00:27:29.032 [2024-12-10 05:04:19.954134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.954152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.954318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.954335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.954474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.954491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.954648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.954665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.954867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.954884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 
00:27:29.032 [2024-12-10 05:04:19.954980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.954997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.955137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.955156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.955235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.955253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.955347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.955364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.955532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.955550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 
00:27:29.032 [2024-12-10 05:04:19.955711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.955728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.955870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.955891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.956042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.956060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.956199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.956217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.956379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.956397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 
00:27:29.032 [2024-12-10 05:04:19.956481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.956500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.956660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.956677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.956767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.956784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.956880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.956898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 00:27:29.032 [2024-12-10 05:04:19.957001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.032 [2024-12-10 05:04:19.957019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.032 qpair failed and we were unable to recover it. 
00:27:29.035 [2024-12-10 05:04:19.972391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.035 [2024-12-10 05:04:19.972408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.035 qpair failed and we were unable to recover it. 00:27:29.035 [2024-12-10 05:04:19.972564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.035 [2024-12-10 05:04:19.972581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.035 qpair failed and we were unable to recover it. 00:27:29.035 [2024-12-10 05:04:19.972734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.035 [2024-12-10 05:04:19.972754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.035 qpair failed and we were unable to recover it. 00:27:29.035 [2024-12-10 05:04:19.972900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.035 [2024-12-10 05:04:19.972917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.035 qpair failed and we were unable to recover it. 00:27:29.035 [2024-12-10 05:04:19.973019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.035 [2024-12-10 05:04:19.973036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.035 qpair failed and we were unable to recover it. 
00:27:29.035 [2024-12-10 05:04:19.973127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.035 [2024-12-10 05:04:19.973145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.035 qpair failed and we were unable to recover it. 00:27:29.035 [2024-12-10 05:04:19.973260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.035 [2024-12-10 05:04:19.973279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.035 qpair failed and we were unable to recover it. 00:27:29.035 [2024-12-10 05:04:19.973360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.035 [2024-12-10 05:04:19.973378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.035 qpair failed and we were unable to recover it. 00:27:29.035 [2024-12-10 05:04:19.973538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.035 [2024-12-10 05:04:19.973556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.035 qpair failed and we were unable to recover it. 00:27:29.035 [2024-12-10 05:04:19.973636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.035 [2024-12-10 05:04:19.973652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.035 qpair failed and we were unable to recover it. 
00:27:29.035 [2024-12-10 05:04:19.973731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.035 [2024-12-10 05:04:19.973749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.035 qpair failed and we were unable to recover it. 00:27:29.035 [2024-12-10 05:04:19.973842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.035 [2024-12-10 05:04:19.973859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.035 qpair failed and we were unable to recover it. 00:27:29.036 [2024-12-10 05:04:19.973940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.973958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 00:27:29.036 [2024-12-10 05:04:19.974118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.974135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 00:27:29.036 [2024-12-10 05:04:19.974211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.974229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 
00:27:29.036 [2024-12-10 05:04:19.974309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.974326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 00:27:29.036 [2024-12-10 05:04:19.974422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.974439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 00:27:29.036 [2024-12-10 05:04:19.974515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.974533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 00:27:29.036 [2024-12-10 05:04:19.974677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.974693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 00:27:29.036 [2024-12-10 05:04:19.974776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.974793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 
00:27:29.036 [2024-12-10 05:04:19.974882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.974899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 00:27:29.036 [2024-12-10 05:04:19.974983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.975000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 00:27:29.036 [2024-12-10 05:04:19.975087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.975104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 00:27:29.036 [2024-12-10 05:04:19.975261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.975279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 00:27:29.036 [2024-12-10 05:04:19.975355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.975372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 
00:27:29.036 [2024-12-10 05:04:19.975453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.975470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 00:27:29.036 [2024-12-10 05:04:19.975605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.975621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 00:27:29.036 [2024-12-10 05:04:19.975723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.975741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 00:27:29.036 [2024-12-10 05:04:19.975892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.975908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 00:27:29.036 [2024-12-10 05:04:19.975994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.036 [2024-12-10 05:04:19.976015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.036 qpair failed and we were unable to recover it. 
00:27:29.036 [2024-12-10 05:04:19.976197] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:27:29.036 [2024-12-10 05:04:19.976237] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:27:29.038 [2024-12-10 05:04:19.985297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.985315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.985384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.985401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.985491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.985508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.985595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.985612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.985752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.985768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 
00:27:29.038 [2024-12-10 05:04:19.985838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.985855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.985992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.986009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.986220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.986239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.986387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.986405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.986487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.986503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 
00:27:29.038 [2024-12-10 05:04:19.986573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.986590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.986747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.986764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.986920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.986937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.987025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.987041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.987190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.987209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 
00:27:29.038 [2024-12-10 05:04:19.987277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.987297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.987393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.987410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.987547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.987564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.987667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.987684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.987831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.987849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 
00:27:29.038 [2024-12-10 05:04:19.987930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.987946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.988084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.988101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.988253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.988273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.988357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.988375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.038 [2024-12-10 05:04:19.988471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.988490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 
00:27:29.038 [2024-12-10 05:04:19.988585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.038 [2024-12-10 05:04:19.988602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.038 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.988676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.988694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.988829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.988847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.988987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.989004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.989082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.989098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 
00:27:29.039 [2024-12-10 05:04:19.989277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.989295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.989446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.989463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.989607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.989624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.989710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.989727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.989885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.989902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 
00:27:29.039 [2024-12-10 05:04:19.990068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.990084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.990189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.990207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.990305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.990322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.990460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.990477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.990561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.990578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 
00:27:29.039 [2024-12-10 05:04:19.990729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.990747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.990892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.990909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.990999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.991016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.991190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.991209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.991293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.991310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 
00:27:29.039 [2024-12-10 05:04:19.991394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.991412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.991636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.991653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.991804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.991821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.991976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.991992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.992140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.992158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 
00:27:29.039 [2024-12-10 05:04:19.992256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.992274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.992351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.992369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.992448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.992465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.992533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.992549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.992714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.992732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 
00:27:29.039 [2024-12-10 05:04:19.992871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.992888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.992964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.992981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.993147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.993164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.993260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.993277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.993348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.993365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 
00:27:29.039 [2024-12-10 05:04:19.993518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.993536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.993624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.993640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.993788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.993804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.993968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.993984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.994135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.994152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 
00:27:29.039 [2024-12-10 05:04:19.994304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.994321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.994392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.994410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.994503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.994520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.039 qpair failed and we were unable to recover it. 00:27:29.039 [2024-12-10 05:04:19.994614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.039 [2024-12-10 05:04:19.994631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.040 qpair failed and we were unable to recover it. 00:27:29.040 [2024-12-10 05:04:19.994705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.040 [2024-12-10 05:04:19.994722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.040 qpair failed and we were unable to recover it. 
00:27:29.040 [2024-12-10 05:04:19.994882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.040 [2024-12-10 05:04:19.994898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.040 qpair failed and we were unable to recover it. 00:27:29.040 [2024-12-10 05:04:19.995041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.040 [2024-12-10 05:04:19.995058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.040 qpair failed and we were unable to recover it. 00:27:29.040 [2024-12-10 05:04:19.995129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.040 [2024-12-10 05:04:19.995146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.040 qpair failed and we were unable to recover it. 00:27:29.040 [2024-12-10 05:04:19.995324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.040 [2024-12-10 05:04:19.995343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.040 qpair failed and we were unable to recover it. 00:27:29.040 [2024-12-10 05:04:19.995548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.040 [2024-12-10 05:04:19.995566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.040 qpair failed and we were unable to recover it. 
00:27:29.040 [2024-12-10 05:04:19.995715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.040 [2024-12-10 05:04:19.995732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.040 qpair failed and we were unable to recover it. 00:27:29.040 [2024-12-10 05:04:19.995882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.040 [2024-12-10 05:04:19.995900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.040 qpair failed and we were unable to recover it. 00:27:29.040 [2024-12-10 05:04:19.995984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.040 [2024-12-10 05:04:19.996001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.040 qpair failed and we were unable to recover it. 00:27:29.040 [2024-12-10 05:04:19.996084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.040 [2024-12-10 05:04:19.996101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.040 qpair failed and we were unable to recover it. 00:27:29.040 [2024-12-10 05:04:19.996176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.040 [2024-12-10 05:04:19.996194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.040 qpair failed and we were unable to recover it. 
00:27:29.041 [2024-12-10 05:04:20.005132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.041 [2024-12-10 05:04:20.005151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.041 qpair failed and we were unable to recover it. 00:27:29.041 [2024-12-10 05:04:20.005369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.041 [2024-12-10 05:04:20.005444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.041 qpair failed and we were unable to recover it. 00:27:29.041 [2024-12-10 05:04:20.005617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.041 [2024-12-10 05:04:20.005689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.041 qpair failed and we were unable to recover it. 00:27:29.041 [2024-12-10 05:04:20.005925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.041 [2024-12-10 05:04:20.005997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.041 qpair failed and we were unable to recover it. 00:27:29.041 [2024-12-10 05:04:20.006182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.042 [2024-12-10 05:04:20.006204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.042 qpair failed and we were unable to recover it. 
00:27:29.043 [2024-12-10 05:04:20.016895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.016930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.017207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.017252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.017543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.017578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.017699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.017733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.017870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.017903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 
00:27:29.043 [2024-12-10 05:04:20.018037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.018073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.018254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.018289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.018402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.018436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.018554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.018589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.018778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.018811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 
00:27:29.043 [2024-12-10 05:04:20.018991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.019027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.019198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.019234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.019422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.019457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.019646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.019681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.019816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.019851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 
00:27:29.043 [2024-12-10 05:04:20.019982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.020017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.020196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.020232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.020350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.020384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.020511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.020546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.020727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.020759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 
00:27:29.043 [2024-12-10 05:04:20.020949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.020980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.021085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.021116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.021269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.021303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.021443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.021473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.021675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.021706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 
00:27:29.043 [2024-12-10 05:04:20.021881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.021912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.022085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.022115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.022230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.022260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.022445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.022482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.022615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.022647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 
00:27:29.043 [2024-12-10 05:04:20.022839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.022871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.022975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.023007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.023242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.023278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.023390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.023422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.023683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.023718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 
00:27:29.043 [2024-12-10 05:04:20.023834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.023867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.023993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.024027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.024145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.024192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.043 [2024-12-10 05:04:20.024309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.043 [2024-12-10 05:04:20.024342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.043 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.024467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.024503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 
00:27:29.044 [2024-12-10 05:04:20.024747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.024781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.024905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.024939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.025162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.025209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.025400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.025434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.025625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.025659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 
00:27:29.044 [2024-12-10 05:04:20.025872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.025906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.026088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.026122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.026269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.026302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.026416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.026450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.026622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.026656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 
00:27:29.044 [2024-12-10 05:04:20.026890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.026924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.027163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.027204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.027497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.027533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.027659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.027693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.027944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.027978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 
00:27:29.044 [2024-12-10 05:04:20.028321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.028372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.028511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.028546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.028690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.028723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.028863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.028899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.029016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.029050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 
00:27:29.044 [2024-12-10 05:04:20.029189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.029224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.029344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.029378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.029555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.029589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.029715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.029748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.029871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.029912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 
00:27:29.044 [2024-12-10 05:04:20.030081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.030116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.030254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.030289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.030496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.030532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.030705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.030751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.030891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.030924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 
00:27:29.044 [2024-12-10 05:04:20.031122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.031158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.031293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.031337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.031488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.031530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.031797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.031833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.031951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.031985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 
00:27:29.044 [2024-12-10 05:04:20.032250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.032306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.032561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.032619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.032906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.032982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.033222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.033290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.033491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.033536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 
00:27:29.044 [2024-12-10 05:04:20.033763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.044 [2024-12-10 05:04:20.033835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.044 qpair failed and we were unable to recover it. 00:27:29.044 [2024-12-10 05:04:20.034051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.034094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.034261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.034307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.034471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.034558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.034764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.034807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 
00:27:29.045 [2024-12-10 05:04:20.035076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.035134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.035310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.035370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.035549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.035598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.035835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.035888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.036118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.036154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 
00:27:29.045 [2024-12-10 05:04:20.036331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.036366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.036475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.036509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.036627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.036659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.036834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.036870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.037072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.037106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 
00:27:29.045 [2024-12-10 05:04:20.037313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.037350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.037599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.037633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.037758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.037793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.037974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.038006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.038195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.038229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 
00:27:29.045 [2024-12-10 05:04:20.038430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.038463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.038634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.038668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.038854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.038887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.039125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.039159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.039359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.039394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 
00:27:29.045 [2024-12-10 05:04:20.039521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.039553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.039735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.039768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.039885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.039918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.040078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.040113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.040272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.040339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 
00:27:29.045 [2024-12-10 05:04:20.040512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.040572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.040729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.040767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.041009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.041043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.041321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.041356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.041560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.041596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 
00:27:29.045 [2024-12-10 05:04:20.041840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.041874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.042072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.042106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.042305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.042342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.042597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.042643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.042882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.042918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 
00:27:29.045 [2024-12-10 05:04:20.043059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.043093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.045 [2024-12-10 05:04:20.043288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.045 [2024-12-10 05:04:20.043321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.045 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.043501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.043544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.043731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.043766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.043945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.043978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 
00:27:29.046 [2024-12-10 05:04:20.044184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.044220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.044350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.044385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.044556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.044590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.044844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.044880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.045071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.045105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 
00:27:29.046 [2024-12-10 05:04:20.045261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.045297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.045514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.045548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.045687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.045721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.045852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.045886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.046068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.046103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 
00:27:29.046 [2024-12-10 05:04:20.046236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.046274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.046475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.046509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.046643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.046676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.046925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.046961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.047176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.047210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 
00:27:29.046 [2024-12-10 05:04:20.047392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.047429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.047555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.047591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.047702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.047737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.047924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.047959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.048218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.048254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 
00:27:29.046 [2024-12-10 05:04:20.048372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.048414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.048549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.048584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.048842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.048876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.049049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.049085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.049263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.049301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 
00:27:29.046 [2024-12-10 05:04:20.049508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.049542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.049785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.049819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.049997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.050030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.050223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.050259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.050440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.050473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 
00:27:29.046 [2024-12-10 05:04:20.050656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.050690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.050800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.050834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.051027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.046 [2024-12-10 05:04:20.051060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.046 qpair failed and we were unable to recover it. 00:27:29.046 [2024-12-10 05:04:20.051253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.051289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 00:27:29.047 [2024-12-10 05:04:20.051405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.051437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 
00:27:29.047 [2024-12-10 05:04:20.051746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.051779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 00:27:29.047 [2024-12-10 05:04:20.052037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.052071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 00:27:29.047 [2024-12-10 05:04:20.052258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.052294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 00:27:29.047 [2024-12-10 05:04:20.052584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.052619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 00:27:29.047 [2024-12-10 05:04:20.052825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.052859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 
00:27:29.047 [2024-12-10 05:04:20.052981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.053013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 00:27:29.047 [2024-12-10 05:04:20.053245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.053282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 00:27:29.047 [2024-12-10 05:04:20.053396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.053429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 00:27:29.047 [2024-12-10 05:04:20.053669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.053702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 00:27:29.047 [2024-12-10 05:04:20.053891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.053925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 
00:27:29.047 [2024-12-10 05:04:20.054097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.054130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 00:27:29.047 [2024-12-10 05:04:20.054395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.054430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 00:27:29.047 [2024-12-10 05:04:20.054633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.054666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 00:27:29.047 [2024-12-10 05:04:20.054823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.054840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 00:27:29.047 [2024-12-10 05:04:20.055046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.055064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 
00:27:29.047 [2024-12-10 05:04:20.055236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.055254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 00:27:29.047 [2024-12-10 05:04:20.055491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.055513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 00:27:29.047 [2024-12-10 05:04:20.055662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.055679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 00:27:29.047 [2024-12-10 05:04:20.055925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.055943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 00:27:29.047 [2024-12-10 05:04:20.056114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.047 [2024-12-10 05:04:20.056132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.047 qpair failed and we were unable to recover it. 
00:27:29.047 [2024-12-10 05:04:20.056233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.056248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.056332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.056348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.056535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.056554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.056728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.056749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.056905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.056925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.057161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.057190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.057478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.057508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.058883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:29.047 [2024-12-10 05:04:20.061176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.061203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.061406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.061423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.061645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.061674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.061850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.061868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.061957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.061972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.062124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.062143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.062291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.062309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.062448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.062464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.062552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.062567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.062654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.062670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.062826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.062844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.047 qpair failed and we were unable to recover it.
00:27:29.047 [2024-12-10 05:04:20.062932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.047 [2024-12-10 05:04:20.062947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.063097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.063114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.063273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.063290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.063469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.063485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.063580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.063594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.063684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.063700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.063782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.063796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.063954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.063970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.064064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.064078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.064232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.064250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.064403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.064418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.064563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.064576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.064706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.064722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.064801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.064813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.064994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.065013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.065299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.065322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.065488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.065509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.065665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.065680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.065917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.065933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.066196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.066220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.066371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.066392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.066616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.066638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.066795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.066811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.066899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.066912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.068693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.068721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.068951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.068963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.069042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.069052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.069253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.069266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.069443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.069456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.069655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.069667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.069808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.069821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.070061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.070077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.070219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.070232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.070428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.070443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.070578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.070591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.070730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.070742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.070901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.070913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.071075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.071089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.071182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.071195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.071362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.071377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.071456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.048 [2024-12-10 05:04:20.071468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.048 qpair failed and we were unable to recover it.
00:27:29.048 [2024-12-10 05:04:20.071681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.071695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.071856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.071871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.072016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.072030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.072121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.072136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.072370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.072385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.072494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.072508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.072586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.072599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.072751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.072767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.072842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.072855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.072976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.072991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.073088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.073102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.073180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.073195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.073273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.073286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.073423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.073438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.073517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.073531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.073726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.073741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.073807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.073820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.073889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.073902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.073994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.074008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.074220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.074236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.074334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.074348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.074430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.074442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.074584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.074600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.074722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.074736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.074829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.074842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.074958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.074973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.075132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.075147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.075237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.075251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.075330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.075343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.075484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.075499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.075588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.075604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.075756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.075771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.075971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.075986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.076146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.076160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.076226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.076239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.076389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.049 [2024-12-10 05:04:20.076403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.049 qpair failed and we were unable to recover it.
00:27:29.049 [2024-12-10 05:04:20.076508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.049 [2024-12-10 05:04:20.076536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.049 qpair failed and we were unable to recover it. 00:27:29.049 [2024-12-10 05:04:20.076610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.049 [2024-12-10 05:04:20.076622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.049 qpair failed and we were unable to recover it. 00:27:29.049 [2024-12-10 05:04:20.076697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.049 [2024-12-10 05:04:20.076709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.049 qpair failed and we were unable to recover it. 00:27:29.049 [2024-12-10 05:04:20.076924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.049 [2024-12-10 05:04:20.076938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.049 qpair failed and we were unable to recover it. 00:27:29.049 [2024-12-10 05:04:20.077032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.049 [2024-12-10 05:04:20.077045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.049 qpair failed and we were unable to recover it. 
00:27:29.049 [2024-12-10 05:04:20.077126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.049 [2024-12-10 05:04:20.077139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.049 qpair failed and we were unable to recover it. 00:27:29.049 [2024-12-10 05:04:20.077278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.077292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.077479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.077494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.077646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.077660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.077817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.077830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 
00:27:29.050 [2024-12-10 05:04:20.077967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.077982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.078147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.078161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.078386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.078401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.078539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.078552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.078637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.078651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 
00:27:29.050 [2024-12-10 05:04:20.078800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.078813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.078911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.078926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.079070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.079083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.079151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.079164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.079311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.079324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 
00:27:29.050 [2024-12-10 05:04:20.079401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.079414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.079493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.079504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.079647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.079660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.079793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.079807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.079941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.079956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 
00:27:29.050 [2024-12-10 05:04:20.080102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.080115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.080309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.080324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.080548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.080561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.080759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.080772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.080910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.080923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 
00:27:29.050 [2024-12-10 05:04:20.081164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.081181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.081258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.081270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.081337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.081349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.081485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.081497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.081587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.081602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 
00:27:29.050 [2024-12-10 05:04:20.081673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.081684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.081753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.081765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.081838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.081850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.081982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.081994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.082084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.082098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 
00:27:29.050 [2024-12-10 05:04:20.082159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.082176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.082236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.082250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.050 [2024-12-10 05:04:20.082443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.050 [2024-12-10 05:04:20.082456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.050 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.082697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.082710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.082919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.082932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 
00:27:29.051 [2024-12-10 05:04:20.082996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.083008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.083208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.083223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.083362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.083375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.083511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.083525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.083610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.083623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 
00:27:29.051 [2024-12-10 05:04:20.083771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.083785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.083925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.083939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.084072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.084087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.084183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.084203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.084272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.084286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 
00:27:29.051 [2024-12-10 05:04:20.084424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.084437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.084537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.084555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.084705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.084721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.084860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.084879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.085083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.085100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 
00:27:29.051 [2024-12-10 05:04:20.085288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.085306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.085400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.085417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.085563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.085580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.085718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.085735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.085876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.085894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 
00:27:29.051 [2024-12-10 05:04:20.085967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.085982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.086084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.086103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.086254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.086271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.086415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.086433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.086600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.086618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 
00:27:29.051 [2024-12-10 05:04:20.086800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.086816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.087053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.087071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.087174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.087192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.087445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.087462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.087674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.087699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 
00:27:29.051 [2024-12-10 05:04:20.087874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.087890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.051 [2024-12-10 05:04:20.088034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.051 [2024-12-10 05:04:20.088053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.051 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.088205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.088225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.088399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.088417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.088574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.088593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 
00:27:29.329 [2024-12-10 05:04:20.088727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.088745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.088851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.088867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.089110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.089129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.089276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.089293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.089366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.089384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 
00:27:29.329 [2024-12-10 05:04:20.089537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.089556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.089736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.089753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.089912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.089929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.090095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.090112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.090215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.090234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 
00:27:29.329 [2024-12-10 05:04:20.090375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.090393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.090549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.090566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.090822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.090838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.090939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.090957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.091111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.091128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 
00:27:29.329 [2024-12-10 05:04:20.091356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.091375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.091451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.091467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.091615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.091632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.091788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.091804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.091954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.091971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 
00:27:29.329 [2024-12-10 05:04:20.092140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.092156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.092398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.092416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.092570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.092587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.092760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.092778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.092926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.092942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 
00:27:29.329 [2024-12-10 05:04:20.093114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.093131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.093320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.093336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.093496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.093513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.093678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.329 [2024-12-10 05:04:20.093695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.329 qpair failed and we were unable to recover it. 00:27:29.329 [2024-12-10 05:04:20.093799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.093817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 
00:27:29.330 [2024-12-10 05:04:20.093883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.093898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.093976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.093990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.094172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.094189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.094278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.094297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.094509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.094529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 
00:27:29.330 [2024-12-10 05:04:20.094697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.094720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.094807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.094830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.095045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.095067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.095243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.095264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.095422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.095445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 
00:27:29.330 [2024-12-10 05:04:20.095633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.095657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.095745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.095764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.095913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.095936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.096097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.096119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.096270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.096293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 
00:27:29.330 [2024-12-10 05:04:20.096401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.096424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.096522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.096543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.096780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.096803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.097020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.097042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.097211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.097235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 
00:27:29.330 [2024-12-10 05:04:20.097392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.097417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.097661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.097683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.097771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.097793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.098021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.098043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.098276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.098299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 
00:27:29.330 [2024-12-10 05:04:20.098470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.098492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.098670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.098693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.098951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.098973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.099071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.099093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.099261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.099284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 
00:27:29.330 [2024-12-10 05:04:20.099455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.099477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.099665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.099689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.099907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.099929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.100114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.100138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.100301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.100324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 
00:27:29.330 [2024-12-10 05:04:20.100432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.100455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.100551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.100573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.330 [2024-12-10 05:04:20.100736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.330 [2024-12-10 05:04:20.100757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.330 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.100842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.100865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.101164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.101194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 
00:27:29.331 [2024-12-10 05:04:20.101341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.101362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.101576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.101598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.101749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.101770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.101982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.102006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.102111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.102136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 
00:27:29.331 [2024-12-10 05:04:20.102310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.102332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.102569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.102591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.102780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.102802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.102974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.102996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.103229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.103254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 
00:27:29.331 [2024-12-10 05:04:20.103427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.103450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.103630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.103652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.103808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.103829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.104074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.104097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.104256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.104279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 
00:27:29.331 [2024-12-10 05:04:20.104430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.104451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.104624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.104645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.104758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.104785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.104976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.105005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.105246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.105276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 
00:27:29.331 [2024-12-10 05:04:20.105510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.105538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.105716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.105743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.105971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.105999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.106263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.106294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.106353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.331 [2024-12-10 05:04:20.106379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.331 [2024-12-10 05:04:20.106387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.331 [2024-12-10 05:04:20.106393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:29.331 [2024-12-10 05:04:20.106399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:29.331 [2024-12-10 05:04:20.106476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.106503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.106754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.106782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.107008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.107037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.107206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.107236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.107350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.107378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 
00:27:29.331 [2024-12-10 05:04:20.107633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.107706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.331 qpair failed and we were unable to recover it. 00:27:29.331 [2024-12-10 05:04:20.107997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-10 05:04:20.107928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:29.332 [2024-12-10 05:04:20.108061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.108042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:29.332 [2024-12-10 05:04:20.108176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:29.332 [2024-12-10 05:04:20.108188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:29.332 [2024-12-10 05:04:20.108269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.108304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.108434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.108467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 
00:27:29.332 [2024-12-10 05:04:20.108671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.108706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.108965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.108998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.109251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.109300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.109425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.109459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.109715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.109751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 
00:27:29.332 [2024-12-10 05:04:20.109882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.109915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.110044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.110078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.110334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.110371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.110581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.110624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.110800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.110835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 
00:27:29.332 [2024-12-10 05:04:20.110951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.110987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.111096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.111130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.111347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.111382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.111641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.111676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.111811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.111845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 
00:27:29.332 [2024-12-10 05:04:20.112020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.112053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.112239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.112274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.112548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.112582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.112775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.112811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.113044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.113077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 
00:27:29.332 [2024-12-10 05:04:20.113269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.113305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.113543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.113579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.113781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.113815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.113995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.114030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.114316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.114350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 
00:27:29.332 [2024-12-10 05:04:20.114521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.114555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.114730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.114764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.114890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.114924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.115196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.115232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.115417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.115452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 
00:27:29.332 [2024-12-10 05:04:20.115634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.115667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.115861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.115895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.116012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.116046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.116301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.116338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 00:27:29.332 [2024-12-10 05:04:20.116514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-10 05:04:20.116547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.332 qpair failed and we were unable to recover it. 
00:27:29.332 [2024-12-10 05:04:20.116746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.116781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.117054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.117091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.117284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.117321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.117565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.117599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.117779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.117813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 
00:27:29.333 [2024-12-10 05:04:20.118100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.118136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.118288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.118328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.118544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.118578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.118763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.118798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.118978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.119012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 
00:27:29.333 [2024-12-10 05:04:20.119131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.119164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.119302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.119336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.119575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.119610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.119781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.119814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.120067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.120103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 
00:27:29.333 [2024-12-10 05:04:20.120332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.120380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.120574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.120610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.120798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.120832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.120943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.120977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.121226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.121262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 
00:27:29.333 [2024-12-10 05:04:20.121450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.121486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.121659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.121693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.121815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.121848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.122035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.122070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.122337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.122376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 
00:27:29.333 [2024-12-10 05:04:20.122567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.122602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.122799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.122834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.123018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.123060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.123204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.123240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.123446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.123482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 
00:27:29.333 [2024-12-10 05:04:20.123668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.333 [2024-12-10 05:04:20.123703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.333 qpair failed and we were unable to recover it. 00:27:29.333 [2024-12-10 05:04:20.123900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.123938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.124204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.124243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.124515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.124551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.124797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.124835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 
00:27:29.334 [2024-12-10 05:04:20.125022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.125057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.125191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.125229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.125499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.125536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.125724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.125759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.125968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.126005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 
00:27:29.334 [2024-12-10 05:04:20.126184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.126221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.126482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.126518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.126650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.126685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.126820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.126857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.127032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.127068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 
00:27:29.334 [2024-12-10 05:04:20.127303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.127340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.127544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.127580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.127783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.127818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.128015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.128050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.128304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.128340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 
00:27:29.334 [2024-12-10 05:04:20.128587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.128621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.128801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.128835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.129024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.129057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.129337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.129375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.129574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.129611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 
00:27:29.334 [2024-12-10 05:04:20.129789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.129824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.130062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.130097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.130268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.130303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.130484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.130519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.130694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.130729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 
00:27:29.334 [2024-12-10 05:04:20.130991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.131027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.131277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.131313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.131505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.131540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.131731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.131766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.131951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.131985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 
00:27:29.334 [2024-12-10 05:04:20.132192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.132229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.132360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.132394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.132657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.132699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.132832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.132868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 00:27:29.334 [2024-12-10 05:04:20.133057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.334 [2024-12-10 05:04:20.133095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.334 qpair failed and we were unable to recover it. 
00:27:29.335 [2024-12-10 05:04:20.133365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.133401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.133525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.133559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.133767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.133801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.133993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.134028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.134200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.134235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 
00:27:29.335 [2024-12-10 05:04:20.134484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.134520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.134698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.134731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.134997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.135032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.135271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.135309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.135506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.135541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 
00:27:29.335 [2024-12-10 05:04:20.135827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.135863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.136042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.136079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.136333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.136370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.136645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.136682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.136960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.136995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 
00:27:29.335 [2024-12-10 05:04:20.137191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.137228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.137414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.137448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.137709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.137744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.137951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.137987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.138211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.138248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 
00:27:29.335 [2024-12-10 05:04:20.138444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.138480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.138668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.138703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.138887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.138923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.139115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.139148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.139359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.139394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 
00:27:29.335 [2024-12-10 05:04:20.139578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.139612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.139802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.139836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.140043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.140077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.140341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.140378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.140504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.140539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 
00:27:29.335 [2024-12-10 05:04:20.140788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.140824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.141020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.141056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.141238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.141274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.141474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.141509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.141711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.141747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 
00:27:29.335 [2024-12-10 05:04:20.142035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.335 [2024-12-10 05:04:20.142071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.335 qpair failed and we were unable to recover it. 00:27:29.335 [2024-12-10 05:04:20.142283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.142316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.142512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.142553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.142726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.142759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.143000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.143033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 
00:27:29.336 [2024-12-10 05:04:20.143229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.143265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.143402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.143435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.143639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.143673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.143887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.143921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.144100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.144134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 
00:27:29.336 [2024-12-10 05:04:20.144472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.144542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.144773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.144838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.145061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.145095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.145214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.145249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.145489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.145522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 
00:27:29.336 [2024-12-10 05:04:20.145656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.145691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.145898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.145933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.146145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.146187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.146478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.146513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.146684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.146719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 
00:27:29.336 [2024-12-10 05:04:20.146968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.147000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.147185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.147220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.147490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.147525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.147808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.147842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.148105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.148138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 
00:27:29.336 [2024-12-10 05:04:20.148429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.148464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.148649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.148682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.148947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.148982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.149177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.149213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.149336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.149375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 
00:27:29.336 [2024-12-10 05:04:20.149514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.149546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.149741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.149775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.150022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.150055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.150238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.150273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.150522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.150556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 
00:27:29.336 [2024-12-10 05:04:20.150733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.150768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.151031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.151066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.151262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.151298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.151489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.151522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 00:27:29.336 [2024-12-10 05:04:20.151775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.336 [2024-12-10 05:04:20.151808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.336 qpair failed and we were unable to recover it. 
00:27:29.337 [2024-12-10 05:04:20.152046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.337 [2024-12-10 05:04:20.152082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.337 qpair failed and we were unable to recover it. 00:27:29.337 [2024-12-10 05:04:20.152284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.337 [2024-12-10 05:04:20.152319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.337 qpair failed and we were unable to recover it. 00:27:29.337 [2024-12-10 05:04:20.152590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.337 [2024-12-10 05:04:20.152635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.337 qpair failed and we were unable to recover it. 00:27:29.337 [2024-12-10 05:04:20.152901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.337 [2024-12-10 05:04:20.152934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.337 qpair failed and we were unable to recover it. 00:27:29.337 [2024-12-10 05:04:20.153192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.337 [2024-12-10 05:04:20.153228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420 00:27:29.337 qpair failed and we were unable to recover it. 
00:27:29.337 [2024-12-10 05:04:20.153495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.337 [2024-12-10 05:04:20.153529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e8000b90 with addr=10.0.0.2, port=4420
00:27:29.337 qpair failed and we were unable to recover it.
00:27:29.340 [the three-record sequence above — posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error", "qpair failed and we were unable to recover it." — repeats continuously from 05:04:20.153 through 05:04:20.181, cycling through tqpair=0x7f58e8000b90, tqpair=0x12521a0, and tqpair=0x7f58dc000b90, always with addr=10.0.0.2, port=4420]
00:27:29.340 [2024-12-10 05:04:20.181949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.181982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.182185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.182220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.182348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.182381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.182564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.182598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.182811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.182844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 
00:27:29.340 [2024-12-10 05:04:20.183040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.183073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.183199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.183233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.183360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.183393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.183529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.183563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.183814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.183847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 
00:27:29.340 [2024-12-10 05:04:20.184085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.184118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.184235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.184270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.184459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.184492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.184614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.184648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.184765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.184798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 
00:27:29.340 [2024-12-10 05:04:20.184991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.185024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.185180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.185216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.185383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.185416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.185596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.185629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.185818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.185852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 
00:27:29.340 [2024-12-10 05:04:20.186112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.186146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.186349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.186384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.186552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.186587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.186695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.186729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.186838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.186871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 
00:27:29.340 [2024-12-10 05:04:20.186980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-12-10 05:04:20.187014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-12-10 05:04:20.187242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.187277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.187469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.187502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.187738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.187772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.187966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.188007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 
00:27:29.341 [2024-12-10 05:04:20.188275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.188310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.188589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.188623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.188818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.188852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.189120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.189152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.189439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.189473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 
00:27:29.341 [2024-12-10 05:04:20.189727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.189761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.190022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.190055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.190358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.190393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.190646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.190680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.190855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.190888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 
00:27:29.341 [2024-12-10 05:04:20.191067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.191099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.191292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.191328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.191594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.191627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.191900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.191934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.192191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.192225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 
00:27:29.341 [2024-12-10 05:04:20.192398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.192432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.192684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.192716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.193005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.193038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.193211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.193247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.193423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.193456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 
00:27:29.341 [2024-12-10 05:04:20.193661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.193694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.193875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.193909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.194190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.194225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.194511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.194545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.194831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.194864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 
00:27:29.341 [2024-12-10 05:04:20.195128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.195162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.195413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.195447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.195698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.195732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.196021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.196055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.196328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.196363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 
00:27:29.341 [2024-12-10 05:04:20.196508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.196542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.196724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-12-10 05:04:20.196756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-12-10 05:04:20.197019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.197053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-12-10 05:04:20.197181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.197217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-12-10 05:04:20.197412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.197445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 
00:27:29.342 [2024-12-10 05:04:20.197637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.197671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-12-10 05:04:20.197847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.197879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-12-10 05:04:20.198052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.198085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-12-10 05:04:20.198283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.198318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-12-10 05:04:20.198503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.198543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 
00:27:29.342 [2024-12-10 05:04:20.198729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.198762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-12-10 05:04:20.198950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.198983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-12-10 05:04:20.199177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.199212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-12-10 05:04:20.199446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.199480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-12-10 05:04:20.199675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.199709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 
00:27:29.342 [2024-12-10 05:04:20.199970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.200003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-12-10 05:04:20.200240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.200274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-12-10 05:04:20.200485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.200518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-12-10 05:04:20.200704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.200737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-12-10 05:04:20.200993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.201025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 
00:27:29.342 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:29.342 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:29.342 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:29.342 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:29.342 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:29.342 [... interleaved with the xtrace lines above, the same sequence (connect() failed, errno = 111 → sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it) continues, timestamps 05:04:20.201196 through 05:04:20.204439 ...]
00:27:29.342 [2024-12-10 05:04:20.204629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.204662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-12-10 05:04:20.204847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.204881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-12-10 05:04:20.205119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-12-10 05:04:20.205152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.205360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.205398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.205581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.205615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 
00:27:29.343 [2024-12-10 05:04:20.205851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.205886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.206079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.206113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.206312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.206347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.206533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.206570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.206775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.206808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 
00:27:29.343 [2024-12-10 05:04:20.207014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.207047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.207237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.207272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.207555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.207589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.207707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.207740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.207855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.207894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 
00:27:29.343 [2024-12-10 05:04:20.208105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.208138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.208384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.208419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.208590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.208623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.208745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.208778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.209039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.209072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 
00:27:29.343 [2024-12-10 05:04:20.209365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.209400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.209634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.209667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.209854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.209888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.210124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.210158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.210426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.210461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 
00:27:29.343 [2024-12-10 05:04:20.210590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.210624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.210806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.210839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.210978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.211011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.211213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.211249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.211534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.211568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 
00:27:29.343 [2024-12-10 05:04:20.211793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.211827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.212017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.212051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.212250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.212285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.212417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.212453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.212665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.212698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 
00:27:29.343 [2024-12-10 05:04:20.212950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.212984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.213174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-12-10 05:04:20.213208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-12-10 05:04:20.213324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.213358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.213647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.213680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.213939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.213972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 
00:27:29.344 [2024-12-10 05:04:20.214110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.214145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.214308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.214342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.214451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.214484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.214666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.214699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.214956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.214990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 
00:27:29.344 [2024-12-10 05:04:20.215161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.215209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.215390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.215424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.215627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.215660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.216000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.216033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.216177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.216212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 
00:27:29.344 [2024-12-10 05:04:20.216467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.216500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.216738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.216772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.216879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.216915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.217127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.217160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.217345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.217384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 
00:27:29.344 [2024-12-10 05:04:20.217575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.217609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.217861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.217894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.218092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.218125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.218317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.218351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.218527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.218560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 
00:27:29.344 [2024-12-10 05:04:20.218851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.218886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.219148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.219209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.219425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.219458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.219587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.219620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.219890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.219923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 
00:27:29.344 [2024-12-10 05:04:20.220179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.220215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.220526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.220560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.220759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.220793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.220988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.221023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.221279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.221314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 
00:27:29.344 [2024-12-10 05:04:20.221488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.221523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.221693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.221726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.221851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.221884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.222061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.222095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-12-10 05:04:20.222357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-12-10 05:04:20.222391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 
00:27:29.344 [2024-12-10 05:04:20.222581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.344 [2024-12-10 05:04:20.222615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.344 qpair failed and we were unable to recover it.
00:27:29.346 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:29.346 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:29.346 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.346 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:29.347 [2024-12-10 05:04:20.242841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.347 [2024-12-10 05:04:20.242885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.347 qpair failed and we were unable to recover it.
00:27:29.348 [2024-12-10 05:04:20.249977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.250011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.250302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.250337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.250545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.250579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.250820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.250854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.251090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.251123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 
00:27:29.348 [2024-12-10 05:04:20.251322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.251357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.251594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.251628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.251882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.251915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.252092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.252125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.252345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.252380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 
00:27:29.348 [2024-12-10 05:04:20.252572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.252605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.252862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.252916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.253111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.253146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.253405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.253440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.253615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.253649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 
00:27:29.348 [2024-12-10 05:04:20.253858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.253891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.254079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.254111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.254369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.254404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.254698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.254732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.254991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.255024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 
00:27:29.348 [2024-12-10 05:04:20.255262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.255296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.255534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.255569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.255774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.255807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.255982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.256016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.256216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.256251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 
00:27:29.348 [2024-12-10 05:04:20.256452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.256486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.256601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.256635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.256960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.256993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.257191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.257226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.257469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.257504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 
00:27:29.348 [2024-12-10 05:04:20.257746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.257779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.258076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.258109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.258370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.258404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.258606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.258639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.258904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.258938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 
00:27:29.348 [2024-12-10 05:04:20.259190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.259226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.259418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.259451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.259643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-12-10 05:04:20.259676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-12-10 05:04:20.259888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.259928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.260176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.260210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 
00:27:29.349 [2024-12-10 05:04:20.260465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.260499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.260739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.260773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.260959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.260992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.261118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.261151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.261420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.261456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 
00:27:29.349 [2024-12-10 05:04:20.261700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.261733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.261997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.262032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.262227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.262263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.262470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.262503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.262674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.262708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 
00:27:29.349 [2024-12-10 05:04:20.262888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.262922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.263062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.263096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.263377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.263414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.263621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.263655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.263930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.263965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 
00:27:29.349 [2024-12-10 05:04:20.264240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.264276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.264399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.264433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.264604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.264637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.264818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.264853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.265043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.265076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 
00:27:29.349 [2024-12-10 05:04:20.265289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.265324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.265500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.265535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.265726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.265759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.266038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.266073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.266343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.266379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 
00:27:29.349 [2024-12-10 05:04:20.266657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.266696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.266914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.266948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.267160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.267202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.267381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.267414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.267670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.267704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 
00:27:29.349 [2024-12-10 05:04:20.267911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.267946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.268134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.268184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.268385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.268420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.268637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.268671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 00:27:29.349 [2024-12-10 05:04:20.268889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.349 [2024-12-10 05:04:20.268922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420 00:27:29.349 qpair failed and we were unable to recover it. 
00:27:29.349 [2024-12-10 05:04:20.269162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-12-10 05:04:20.269206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-12-10 05:04:20.269512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-12-10 05:04:20.269546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-12-10 05:04:20.269836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.269870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.270084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.270118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.270386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.270421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.270648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.270682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.270981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.271014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.271272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.271306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.271603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.271636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 Malloc0
00:27:29.350 [2024-12-10 05:04:20.271902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.271936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.272142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.272182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.272354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.272388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.272627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.350 [2024-12-10 05:04:20.272661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.272858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.272891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:29.350 [2024-12-10 05:04:20.273159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.273202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.350 [2024-12-10 05:04:20.273459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.273493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12521a0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:29.350 [2024-12-10 05:04:20.273638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.273682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.273893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.273928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.274189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.274222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.274396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.274430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.274695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.274729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.274918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.274950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.275151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.275194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.275387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.275421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.275611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.275644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.275882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.275915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.276153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.276198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.276382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.276415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.276635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.276667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.276864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.276898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.277136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.277178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.277376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.277409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.277603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.277637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.277936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.277969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.278146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.278190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.278377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.278410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.278547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.278580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.278716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.278749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.278998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.279030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-12-10 05:04:20.279306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-12-10 05:04:20.279342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.279412] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:29.351 [2024-12-10 05:04:20.279471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.279503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.279737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.279769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.279953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.279986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.280179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.280213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.280389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.280423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.280612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.280645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.280906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.280939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.281200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.281234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.281425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.281459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.281657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.281691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.281954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.281987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.282233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.282267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.282522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.282556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.282831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.282864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.283128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.283161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.283440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.283480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.283673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.283706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.283958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.283991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.284207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.284243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.284423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.284456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.284668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.284701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.285011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.285045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.285219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.285254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:29.351 [2024-12-10 05:04:20.285516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.285550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.285748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.351 [2024-12-10 05:04:20.285783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:29.351 [2024-12-10 05:04:20.286046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.286079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.286366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.286401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.286671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.286705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.286983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.287017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.287193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.287228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.287404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.287437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.287646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.287680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-12-10 05:04:20.287865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-12-10 05:04:20.287898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.288084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.288117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.288320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.288356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.288465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.288498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.288686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.288719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.288984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.289018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.289301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.289336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.289606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.289639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58dc000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.289929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.289968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.290209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.290244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.290529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.290563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.290687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.290721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.290847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.290880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.291147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.291192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.291453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.291486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.291693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.291727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.291974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.292007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.292197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.292232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.292471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.292504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.292790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.292824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.293008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.293042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.352 [2024-12-10 05:04:20.293302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.293338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:29.352 [2024-12-10 05:04:20.293648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.293682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.352 [2024-12-10 05:04:20.293927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.293960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.294066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:29.352 [2024-12-10 05:04:20.294100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.294389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.294424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.294604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.294637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.294817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.294850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.295023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.295056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.295344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.295379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.295645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.295678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.295854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.295887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.296154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.296213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.296349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.296383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-12-10 05:04:20.296619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-12-10 05:04:20.296654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-12-10 05:04:20.296823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-12-10 05:04:20.296856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-12-10 05:04:20.297129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-12-10 05:04:20.297163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-12-10 05:04:20.297362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-12-10 05:04:20.297396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-12-10 05:04:20.297564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-12-10 05:04:20.297596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-12-10 05:04:20.297833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.297866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.298049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.298082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.298266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.298301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.298553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.298586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.298757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.298790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 
00:27:29.353 [2024-12-10 05:04:20.299078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.299111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.299244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.299278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.299447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.299487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.299661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.299694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.299976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.300009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 
00:27:29.353 [2024-12-10 05:04:20.300149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.300190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.300315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.300347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.300469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.300503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.300633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.300666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.300929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.300963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 
00:27:29.353 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.353 [2024-12-10 05:04:20.301252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.301287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:29.353 [2024-12-10 05:04:20.301478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.301512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.353 [2024-12-10 05:04:20.301798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.301832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 
00:27:29.353 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:29.353 [2024-12-10 05:04:20.302101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.302135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.302317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.302352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.302612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.302645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.302933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.302966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.303186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.303222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 
00:27:29.353 [2024-12-10 05:04:20.303396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.303429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.303618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.303651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.303849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.303882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.304164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.304216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 00:27:29.353 [2024-12-10 05:04:20.304469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.353 [2024-12-10 05:04:20.304504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f58e0000b90 with addr=10.0.0.2, port=4420 00:27:29.353 qpair failed and we were unable to recover it. 
00:27:29.353 [2024-12-10 05:04:20.304570] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.353 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.353 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:29.353 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.353 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:29.353 [2024-12-10 05:04:20.310108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.354 [2024-12-10 05:04:20.310246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.354 [2024-12-10 05:04:20.310293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.354 [2024-12-10 05:04:20.310318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.354 [2024-12-10 05:04:20.310348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.354 [2024-12-10 05:04:20.310401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.354 qpair failed and we were unable to recover it. 
00:27:29.354 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.354 05:04:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 784238 00:27:29.354 [2024-12-10 05:04:20.319969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.354 [2024-12-10 05:04:20.320052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.354 [2024-12-10 05:04:20.320080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.354 [2024-12-10 05:04:20.320094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.354 [2024-12-10 05:04:20.320108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.354 [2024-12-10 05:04:20.320142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.354 qpair failed and we were unable to recover it. 
00:27:29.354 [2024-12-10 05:04:20.330043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.354 [2024-12-10 05:04:20.330171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.354 [2024-12-10 05:04:20.330192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.354 [2024-12-10 05:04:20.330202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.354 [2024-12-10 05:04:20.330211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.354 [2024-12-10 05:04:20.330234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.354 qpair failed and we were unable to recover it. 
00:27:29.354 [2024-12-10 05:04:20.340008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.354 [2024-12-10 05:04:20.340087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.354 [2024-12-10 05:04:20.340100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.354 [2024-12-10 05:04:20.340107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.354 [2024-12-10 05:04:20.340113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.354 [2024-12-10 05:04:20.340130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.354 qpair failed and we were unable to recover it. 
00:27:29.354 [2024-12-10 05:04:20.349963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.354 [2024-12-10 05:04:20.350057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.354 [2024-12-10 05:04:20.350071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.354 [2024-12-10 05:04:20.350078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.354 [2024-12-10 05:04:20.350087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.354 [2024-12-10 05:04:20.350103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.354 qpair failed and we were unable to recover it. 
00:27:29.354 [2024-12-10 05:04:20.359988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.354 [2024-12-10 05:04:20.360041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.354 [2024-12-10 05:04:20.360055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.354 [2024-12-10 05:04:20.360062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.354 [2024-12-10 05:04:20.360069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.354 [2024-12-10 05:04:20.360084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.354 qpair failed and we were unable to recover it. 
00:27:29.354 [2024-12-10 05:04:20.370001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.354 [2024-12-10 05:04:20.370055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.354 [2024-12-10 05:04:20.370070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.354 [2024-12-10 05:04:20.370077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.354 [2024-12-10 05:04:20.370084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.354 [2024-12-10 05:04:20.370099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.354 qpair failed and we were unable to recover it. 
00:27:29.354 [2024-12-10 05:04:20.380056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.354 [2024-12-10 05:04:20.380124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.354 [2024-12-10 05:04:20.380137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.354 [2024-12-10 05:04:20.380144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.354 [2024-12-10 05:04:20.380150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.354 [2024-12-10 05:04:20.380164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.354 qpair failed and we were unable to recover it. 
00:27:29.354 [2024-12-10 05:04:20.390076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.354 [2024-12-10 05:04:20.390131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.354 [2024-12-10 05:04:20.390144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.354 [2024-12-10 05:04:20.390151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.354 [2024-12-10 05:04:20.390158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.354 [2024-12-10 05:04:20.390178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.354 qpair failed and we were unable to recover it. 
00:27:29.354 [2024-12-10 05:04:20.400132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.354 [2024-12-10 05:04:20.400194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.354 [2024-12-10 05:04:20.400208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.354 [2024-12-10 05:04:20.400215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.354 [2024-12-10 05:04:20.400222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.354 [2024-12-10 05:04:20.400237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.354 qpair failed and we were unable to recover it. 
00:27:29.354 [2024-12-10 05:04:20.410180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.354 [2024-12-10 05:04:20.410284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.354 [2024-12-10 05:04:20.410297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.354 [2024-12-10 05:04:20.410304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.354 [2024-12-10 05:04:20.410311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.354 [2024-12-10 05:04:20.410327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.354 qpair failed and we were unable to recover it. 
00:27:29.354 [2024-12-10 05:04:20.420229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.354 [2024-12-10 05:04:20.420304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.354 [2024-12-10 05:04:20.420318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.354 [2024-12-10 05:04:20.420327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.354 [2024-12-10 05:04:20.420336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.354 [2024-12-10 05:04:20.420355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.354 qpair failed and we were unable to recover it. 
00:27:29.354 [2024-12-10 05:04:20.430178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.354 [2024-12-10 05:04:20.430236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.354 [2024-12-10 05:04:20.430249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.354 [2024-12-10 05:04:20.430256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.354 [2024-12-10 05:04:20.430263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.354 [2024-12-10 05:04:20.430278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.355 qpair failed and we were unable to recover it. 
00:27:29.355 [2024-12-10 05:04:20.440130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.355 [2024-12-10 05:04:20.440188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.355 [2024-12-10 05:04:20.440206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.355 [2024-12-10 05:04:20.440215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.355 [2024-12-10 05:04:20.440221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.355 [2024-12-10 05:04:20.440237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.355 qpair failed and we were unable to recover it. 
00:27:29.616 [2024-12-10 05:04:20.450269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.616 [2024-12-10 05:04:20.450353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.616 [2024-12-10 05:04:20.450367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.616 [2024-12-10 05:04:20.450374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.616 [2024-12-10 05:04:20.450380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.616 [2024-12-10 05:04:20.450395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.616 qpair failed and we were unable to recover it. 
00:27:29.616 [2024-12-10 05:04:20.460192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.616 [2024-12-10 05:04:20.460259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.616 [2024-12-10 05:04:20.460273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.616 [2024-12-10 05:04:20.460281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.616 [2024-12-10 05:04:20.460288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.616 [2024-12-10 05:04:20.460303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.616 qpair failed and we were unable to recover it. 
00:27:29.616 [2024-12-10 05:04:20.470288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.616 [2024-12-10 05:04:20.470347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.616 [2024-12-10 05:04:20.470360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.616 [2024-12-10 05:04:20.470367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.616 [2024-12-10 05:04:20.470374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.616 [2024-12-10 05:04:20.470388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.616 qpair failed and we were unable to recover it. 
00:27:29.616 [2024-12-10 05:04:20.480324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.616 [2024-12-10 05:04:20.480390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.616 [2024-12-10 05:04:20.480404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.616 [2024-12-10 05:04:20.480411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.616 [2024-12-10 05:04:20.480420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.616 [2024-12-10 05:04:20.480436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.616 qpair failed and we were unable to recover it. 
00:27:29.616 [2024-12-10 05:04:20.490355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.616 [2024-12-10 05:04:20.490407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.616 [2024-12-10 05:04:20.490420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.616 [2024-12-10 05:04:20.490427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.616 [2024-12-10 05:04:20.490433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.616 [2024-12-10 05:04:20.490448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.616 qpair failed and we were unable to recover it. 
00:27:29.616 [2024-12-10 05:04:20.500400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.616 [2024-12-10 05:04:20.500458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.616 [2024-12-10 05:04:20.500471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.616 [2024-12-10 05:04:20.500478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.616 [2024-12-10 05:04:20.500485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.616 [2024-12-10 05:04:20.500501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.616 qpair failed and we were unable to recover it. 
00:27:29.616 [2024-12-10 05:04:20.510423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.616 [2024-12-10 05:04:20.510478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.616 [2024-12-10 05:04:20.510491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.616 [2024-12-10 05:04:20.510498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.616 [2024-12-10 05:04:20.510504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.616 [2024-12-10 05:04:20.510519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.616 qpair failed and we were unable to recover it. 
00:27:29.616 [2024-12-10 05:04:20.520448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.616 [2024-12-10 05:04:20.520502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.616 [2024-12-10 05:04:20.520515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.616 [2024-12-10 05:04:20.520521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.616 [2024-12-10 05:04:20.520528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.616 [2024-12-10 05:04:20.520542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.616 qpair failed and we were unable to recover it. 
00:27:29.616 [2024-12-10 05:04:20.530491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.616 [2024-12-10 05:04:20.530547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.616 [2024-12-10 05:04:20.530561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.616 [2024-12-10 05:04:20.530568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.616 [2024-12-10 05:04:20.530574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.616 [2024-12-10 05:04:20.530589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.616 qpair failed and we were unable to recover it. 
00:27:29.616 [2024-12-10 05:04:20.540440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.616 [2024-12-10 05:04:20.540523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.616 [2024-12-10 05:04:20.540537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.616 [2024-12-10 05:04:20.540545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.616 [2024-12-10 05:04:20.540550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.616 [2024-12-10 05:04:20.540566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.616 qpair failed and we were unable to recover it. 
00:27:29.616 [2024-12-10 05:04:20.550591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.616 [2024-12-10 05:04:20.550699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.616 [2024-12-10 05:04:20.550712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.616 [2024-12-10 05:04:20.550720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.616 [2024-12-10 05:04:20.550726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.616 [2024-12-10 05:04:20.550741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.616 qpair failed and we were unable to recover it. 
00:27:29.616 [2024-12-10 05:04:20.560570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.616 [2024-12-10 05:04:20.560626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.616 [2024-12-10 05:04:20.560639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.616 [2024-12-10 05:04:20.560646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.616 [2024-12-10 05:04:20.560652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.616 [2024-12-10 05:04:20.560668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.616 qpair failed and we were unable to recover it. 
00:27:29.616 [2024-12-10 05:04:20.570584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.616 [2024-12-10 05:04:20.570639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.616 [2024-12-10 05:04:20.570653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.616 [2024-12-10 05:04:20.570660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.616 [2024-12-10 05:04:20.570666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.617 [2024-12-10 05:04:20.570681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.617 qpair failed and we were unable to recover it. 
00:27:29.617 [2024-12-10 05:04:20.580668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.617 [2024-12-10 05:04:20.580774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.617 [2024-12-10 05:04:20.580787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.617 [2024-12-10 05:04:20.580795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.617 [2024-12-10 05:04:20.580801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.617 [2024-12-10 05:04:20.580817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.617 qpair failed and we were unable to recover it. 
00:27:29.617 [2024-12-10 05:04:20.590639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.617 [2024-12-10 05:04:20.590691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.617 [2024-12-10 05:04:20.590704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.617 [2024-12-10 05:04:20.590711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.617 [2024-12-10 05:04:20.590717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.617 [2024-12-10 05:04:20.590733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.617 qpair failed and we were unable to recover it. 
00:27:29.617 [2024-12-10 05:04:20.600664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.617 [2024-12-10 05:04:20.600713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.617 [2024-12-10 05:04:20.600726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.617 [2024-12-10 05:04:20.600733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.617 [2024-12-10 05:04:20.600739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.617 [2024-12-10 05:04:20.600754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.617 qpair failed and we were unable to recover it. 
00:27:29.617 [2024-12-10 05:04:20.610754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.617 [2024-12-10 05:04:20.610859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.617 [2024-12-10 05:04:20.610872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.617 [2024-12-10 05:04:20.610885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.617 [2024-12-10 05:04:20.610892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.617 [2024-12-10 05:04:20.610907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.617 qpair failed and we were unable to recover it. 
00:27:29.617 [2024-12-10 05:04:20.620736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.617 [2024-12-10 05:04:20.620794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.617 [2024-12-10 05:04:20.620806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.617 [2024-12-10 05:04:20.620813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.617 [2024-12-10 05:04:20.620820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.617 [2024-12-10 05:04:20.620834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.617 qpair failed and we were unable to recover it. 
00:27:29.617 [2024-12-10 05:04:20.630754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.617 [2024-12-10 05:04:20.630811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.617 [2024-12-10 05:04:20.630825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.617 [2024-12-10 05:04:20.630832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.617 [2024-12-10 05:04:20.630838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.617 [2024-12-10 05:04:20.630853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.617 qpair failed and we were unable to recover it. 
00:27:29.617 [2024-12-10 05:04:20.640790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.617 [2024-12-10 05:04:20.640846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.617 [2024-12-10 05:04:20.640859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.617 [2024-12-10 05:04:20.640866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.617 [2024-12-10 05:04:20.640872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.617 [2024-12-10 05:04:20.640887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.617 qpair failed and we were unable to recover it. 
00:27:29.617 [2024-12-10 05:04:20.650849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.617 [2024-12-10 05:04:20.650903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.617 [2024-12-10 05:04:20.650917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.617 [2024-12-10 05:04:20.650924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.617 [2024-12-10 05:04:20.650930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.617 [2024-12-10 05:04:20.650948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.617 qpair failed and we were unable to recover it. 
00:27:29.617 [2024-12-10 05:04:20.660853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.617 [2024-12-10 05:04:20.660909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.617 [2024-12-10 05:04:20.660922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.617 [2024-12-10 05:04:20.660929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.617 [2024-12-10 05:04:20.660935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.617 [2024-12-10 05:04:20.660950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.617 qpair failed and we were unable to recover it. 
00:27:29.617 [2024-12-10 05:04:20.670882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.617 [2024-12-10 05:04:20.670935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.617 [2024-12-10 05:04:20.670947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.617 [2024-12-10 05:04:20.670954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.617 [2024-12-10 05:04:20.670960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.617 [2024-12-10 05:04:20.670975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.617 qpair failed and we were unable to recover it. 
00:27:29.617 [2024-12-10 05:04:20.680843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.617 [2024-12-10 05:04:20.680896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.617 [2024-12-10 05:04:20.680909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.617 [2024-12-10 05:04:20.680915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.617 [2024-12-10 05:04:20.680922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.617 [2024-12-10 05:04:20.680938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.617 qpair failed and we were unable to recover it. 
00:27:29.617 [2024-12-10 05:04:20.690918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.617 [2024-12-10 05:04:20.690996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.617 [2024-12-10 05:04:20.691010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.617 [2024-12-10 05:04:20.691017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.617 [2024-12-10 05:04:20.691023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.617 [2024-12-10 05:04:20.691038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.617 qpair failed and we were unable to recover it. 
00:27:29.617 [2024-12-10 05:04:20.700982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.617 [2024-12-10 05:04:20.701045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.617 [2024-12-10 05:04:20.701059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.617 [2024-12-10 05:04:20.701066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.617 [2024-12-10 05:04:20.701073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.617 [2024-12-10 05:04:20.701088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.617 qpair failed and we were unable to recover it. 
00:27:29.617 [2024-12-10 05:04:20.710998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.618 [2024-12-10 05:04:20.711054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.618 [2024-12-10 05:04:20.711067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.618 [2024-12-10 05:04:20.711075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.618 [2024-12-10 05:04:20.711081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.618 [2024-12-10 05:04:20.711097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.618 qpair failed and we were unable to recover it. 
00:27:29.618 [2024-12-10 05:04:20.721094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.618 [2024-12-10 05:04:20.721177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.618 [2024-12-10 05:04:20.721192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.618 [2024-12-10 05:04:20.721199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.618 [2024-12-10 05:04:20.721205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.618 [2024-12-10 05:04:20.721220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.618 qpair failed and we were unable to recover it. 
00:27:29.618 [2024-12-10 05:04:20.731044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.618 [2024-12-10 05:04:20.731095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.618 [2024-12-10 05:04:20.731107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.618 [2024-12-10 05:04:20.731114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.618 [2024-12-10 05:04:20.731120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.618 [2024-12-10 05:04:20.731136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.618 qpair failed and we were unable to recover it. 
00:27:29.618 [2024-12-10 05:04:20.741086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.618 [2024-12-10 05:04:20.741141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.618 [2024-12-10 05:04:20.741158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.618 [2024-12-10 05:04:20.741164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.618 [2024-12-10 05:04:20.741175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.618 [2024-12-10 05:04:20.741190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.618 qpair failed and we were unable to recover it. 
00:27:29.878 [2024-12-10 05:04:20.751137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.878 [2024-12-10 05:04:20.751198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.878 [2024-12-10 05:04:20.751211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.878 [2024-12-10 05:04:20.751218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.878 [2024-12-10 05:04:20.751224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.878 [2024-12-10 05:04:20.751240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.878 qpair failed and we were unable to recover it. 
00:27:29.878 [2024-12-10 05:04:20.761147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.878 [2024-12-10 05:04:20.761206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.878 [2024-12-10 05:04:20.761218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.878 [2024-12-10 05:04:20.761225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.878 [2024-12-10 05:04:20.761231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:29.878 [2024-12-10 05:04:20.761246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.878 qpair failed and we were unable to recover it. 
00:27:29.879 [2024-12-10 05:04:20.771158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.879 [2024-12-10 05:04:20.771212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.879 [2024-12-10 05:04:20.771225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.879 [2024-12-10 05:04:20.771232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.879 [2024-12-10 05:04:20.771240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.879 [2024-12-10 05:04:20.771255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.879 qpair failed and we were unable to recover it.
00:27:29.879 [2024-12-10 05:04:20.781224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.879 [2024-12-10 05:04:20.781280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.879 [2024-12-10 05:04:20.781294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.879 [2024-12-10 05:04:20.781301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.879 [2024-12-10 05:04:20.781307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.879 [2024-12-10 05:04:20.781326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.879 qpair failed and we were unable to recover it.
00:27:29.879 [2024-12-10 05:04:20.791249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.879 [2024-12-10 05:04:20.791317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.879 [2024-12-10 05:04:20.791330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.879 [2024-12-10 05:04:20.791338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.879 [2024-12-10 05:04:20.791344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.879 [2024-12-10 05:04:20.791359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.879 qpair failed and we were unable to recover it.
00:27:29.879 [2024-12-10 05:04:20.801248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.879 [2024-12-10 05:04:20.801300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.879 [2024-12-10 05:04:20.801314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.879 [2024-12-10 05:04:20.801321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.879 [2024-12-10 05:04:20.801327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.879 [2024-12-10 05:04:20.801343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.879 qpair failed and we were unable to recover it.
00:27:29.879 [2024-12-10 05:04:20.811319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.879 [2024-12-10 05:04:20.811397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.879 [2024-12-10 05:04:20.811411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.879 [2024-12-10 05:04:20.811418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.879 [2024-12-10 05:04:20.811424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.879 [2024-12-10 05:04:20.811439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.879 qpair failed and we were unable to recover it.
00:27:29.879 [2024-12-10 05:04:20.821329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.879 [2024-12-10 05:04:20.821384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.879 [2024-12-10 05:04:20.821398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.879 [2024-12-10 05:04:20.821404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.879 [2024-12-10 05:04:20.821411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.879 [2024-12-10 05:04:20.821425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.879 qpair failed and we were unable to recover it.
00:27:29.879 [2024-12-10 05:04:20.831277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.879 [2024-12-10 05:04:20.831361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.879 [2024-12-10 05:04:20.831375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.879 [2024-12-10 05:04:20.831382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.879 [2024-12-10 05:04:20.831388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.879 [2024-12-10 05:04:20.831403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.879 qpair failed and we were unable to recover it.
00:27:29.879 [2024-12-10 05:04:20.841397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.879 [2024-12-10 05:04:20.841451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.879 [2024-12-10 05:04:20.841465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.879 [2024-12-10 05:04:20.841471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.879 [2024-12-10 05:04:20.841478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.879 [2024-12-10 05:04:20.841494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.879 qpair failed and we were unable to recover it.
00:27:29.879 [2024-12-10 05:04:20.851463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.879 [2024-12-10 05:04:20.851519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.879 [2024-12-10 05:04:20.851533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.879 [2024-12-10 05:04:20.851541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.879 [2024-12-10 05:04:20.851548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.879 [2024-12-10 05:04:20.851563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.879 qpair failed and we were unable to recover it.
00:27:29.879 [2024-12-10 05:04:20.861460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.879 [2024-12-10 05:04:20.861520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.879 [2024-12-10 05:04:20.861533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.879 [2024-12-10 05:04:20.861542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.879 [2024-12-10 05:04:20.861549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.879 [2024-12-10 05:04:20.861565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.879 qpair failed and we were unable to recover it.
00:27:29.879 [2024-12-10 05:04:20.871504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.879 [2024-12-10 05:04:20.871591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.879 [2024-12-10 05:04:20.871608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.879 [2024-12-10 05:04:20.871615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.879 [2024-12-10 05:04:20.871621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.879 [2024-12-10 05:04:20.871637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.879 qpair failed and we were unable to recover it.
00:27:29.879 [2024-12-10 05:04:20.881480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.879 [2024-12-10 05:04:20.881536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.879 [2024-12-10 05:04:20.881548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.879 [2024-12-10 05:04:20.881555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.879 [2024-12-10 05:04:20.881561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.879 [2024-12-10 05:04:20.881576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.879 qpair failed and we were unable to recover it.
00:27:29.879 [2024-12-10 05:04:20.891539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.879 [2024-12-10 05:04:20.891602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.879 [2024-12-10 05:04:20.891618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.879 [2024-12-10 05:04:20.891625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.879 [2024-12-10 05:04:20.891631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.879 [2024-12-10 05:04:20.891647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.879 qpair failed and we were unable to recover it.
00:27:29.879 [2024-12-10 05:04:20.901566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.880 [2024-12-10 05:04:20.901629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.880 [2024-12-10 05:04:20.901645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.880 [2024-12-10 05:04:20.901652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.880 [2024-12-10 05:04:20.901659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.880 [2024-12-10 05:04:20.901675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.880 qpair failed and we were unable to recover it.
00:27:29.880 [2024-12-10 05:04:20.911512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.880 [2024-12-10 05:04:20.911566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.880 [2024-12-10 05:04:20.911579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.880 [2024-12-10 05:04:20.911586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.880 [2024-12-10 05:04:20.911595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.880 [2024-12-10 05:04:20.911610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.880 qpair failed and we were unable to recover it.
00:27:29.880 [2024-12-10 05:04:20.921630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.880 [2024-12-10 05:04:20.921696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.880 [2024-12-10 05:04:20.921709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.880 [2024-12-10 05:04:20.921716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.880 [2024-12-10 05:04:20.921722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.880 [2024-12-10 05:04:20.921737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.880 qpair failed and we were unable to recover it.
00:27:29.880 [2024-12-10 05:04:20.931631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.880 [2024-12-10 05:04:20.931687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.880 [2024-12-10 05:04:20.931700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.880 [2024-12-10 05:04:20.931707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.880 [2024-12-10 05:04:20.931714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.880 [2024-12-10 05:04:20.931729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.880 qpair failed and we were unable to recover it.
00:27:29.880 [2024-12-10 05:04:20.941672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.880 [2024-12-10 05:04:20.941729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.880 [2024-12-10 05:04:20.941742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.880 [2024-12-10 05:04:20.941749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.880 [2024-12-10 05:04:20.941756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.880 [2024-12-10 05:04:20.941771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.880 qpair failed and we were unable to recover it.
00:27:29.880 [2024-12-10 05:04:20.951679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.880 [2024-12-10 05:04:20.951735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.880 [2024-12-10 05:04:20.951748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.880 [2024-12-10 05:04:20.951755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.880 [2024-12-10 05:04:20.951761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.880 [2024-12-10 05:04:20.951776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.880 qpair failed and we were unable to recover it.
00:27:29.880 [2024-12-10 05:04:20.961721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.880 [2024-12-10 05:04:20.961784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.880 [2024-12-10 05:04:20.961796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.880 [2024-12-10 05:04:20.961805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.880 [2024-12-10 05:04:20.961812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.880 [2024-12-10 05:04:20.961830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.880 qpair failed and we were unable to recover it.
00:27:29.880 [2024-12-10 05:04:20.971756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.880 [2024-12-10 05:04:20.971812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.880 [2024-12-10 05:04:20.971825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.880 [2024-12-10 05:04:20.971832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.880 [2024-12-10 05:04:20.971840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.880 [2024-12-10 05:04:20.971856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.880 qpair failed and we were unable to recover it.
00:27:29.880 [2024-12-10 05:04:20.981751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.880 [2024-12-10 05:04:20.981840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.880 [2024-12-10 05:04:20.981855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.880 [2024-12-10 05:04:20.981863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.880 [2024-12-10 05:04:20.981869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.880 [2024-12-10 05:04:20.981884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.880 qpair failed and we were unable to recover it.
00:27:29.880 [2024-12-10 05:04:20.991803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.880 [2024-12-10 05:04:20.991858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.880 [2024-12-10 05:04:20.991872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.880 [2024-12-10 05:04:20.991879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.880 [2024-12-10 05:04:20.991885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.880 [2024-12-10 05:04:20.991900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.880 qpair failed and we were unable to recover it.
00:27:29.880 [2024-12-10 05:04:21.001803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.880 [2024-12-10 05:04:21.001868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.880 [2024-12-10 05:04:21.001884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.880 [2024-12-10 05:04:21.001891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.880 [2024-12-10 05:04:21.001897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:29.880 [2024-12-10 05:04:21.001912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.880 qpair failed and we were unable to recover it.
00:27:30.141 [2024-12-10 05:04:21.011891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.141 [2024-12-10 05:04:21.011967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.141 [2024-12-10 05:04:21.011980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.141 [2024-12-10 05:04:21.011987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.141 [2024-12-10 05:04:21.011993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:30.141 [2024-12-10 05:04:21.012008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:30.141 qpair failed and we were unable to recover it.
00:27:30.141 [2024-12-10 05:04:21.021898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.141 [2024-12-10 05:04:21.021956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.141 [2024-12-10 05:04:21.021971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.141 [2024-12-10 05:04:21.021978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.141 [2024-12-10 05:04:21.021984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:30.141 [2024-12-10 05:04:21.021999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:30.141 qpair failed and we were unable to recover it.
00:27:30.141 [2024-12-10 05:04:21.031840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.141 [2024-12-10 05:04:21.031896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.141 [2024-12-10 05:04:21.031910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.141 [2024-12-10 05:04:21.031917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.141 [2024-12-10 05:04:21.031923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:30.141 [2024-12-10 05:04:21.031939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:30.141 qpair failed and we were unable to recover it.
00:27:30.141 [2024-12-10 05:04:21.041989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.141 [2024-12-10 05:04:21.042045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.141 [2024-12-10 05:04:21.042058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.141 [2024-12-10 05:04:21.042066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.141 [2024-12-10 05:04:21.042076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:30.141 [2024-12-10 05:04:21.042092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:30.141 qpair failed and we were unable to recover it.
00:27:30.141 [2024-12-10 05:04:21.051964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.141 [2024-12-10 05:04:21.052049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.141 [2024-12-10 05:04:21.052063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.141 [2024-12-10 05:04:21.052070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.141 [2024-12-10 05:04:21.052075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:30.141 [2024-12-10 05:04:21.052090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:30.141 qpair failed and we were unable to recover it.
00:27:30.141 [2024-12-10 05:04:21.061994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.141 [2024-12-10 05:04:21.062051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.141 [2024-12-10 05:04:21.062064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.141 [2024-12-10 05:04:21.062071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.141 [2024-12-10 05:04:21.062077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:30.141 [2024-12-10 05:04:21.062093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:30.141 qpair failed and we were unable to recover it.
00:27:30.141 [2024-12-10 05:04:21.072021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.141 [2024-12-10 05:04:21.072113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.141 [2024-12-10 05:04:21.072142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.141 [2024-12-10 05:04:21.072150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.142 [2024-12-10 05:04:21.072156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:30.142 [2024-12-10 05:04:21.072181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:30.142 qpair failed and we were unable to recover it.
00:27:30.142 [2024-12-10 05:04:21.082002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.142 [2024-12-10 05:04:21.082051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.142 [2024-12-10 05:04:21.082066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.142 [2024-12-10 05:04:21.082073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.142 [2024-12-10 05:04:21.082079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:30.142 [2024-12-10 05:04:21.082096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:30.142 qpair failed and we were unable to recover it.
00:27:30.142 [2024-12-10 05:04:21.092080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.142 [2024-12-10 05:04:21.092134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.142 [2024-12-10 05:04:21.092148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.142 [2024-12-10 05:04:21.092155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.142 [2024-12-10 05:04:21.092161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:30.142 [2024-12-10 05:04:21.092182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:30.142 qpair failed and we were unable to recover it.
00:27:30.142 [2024-12-10 05:04:21.102105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.142 [2024-12-10 05:04:21.102163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.142 [2024-12-10 05:04:21.102183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.142 [2024-12-10 05:04:21.102191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.142 [2024-12-10 05:04:21.102198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:30.142 [2024-12-10 05:04:21.102214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:30.142 qpair failed and we were unable to recover it.
00:27:30.142 [2024-12-10 05:04:21.112073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.142 [2024-12-10 05:04:21.112128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.142 [2024-12-10 05:04:21.112141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.142 [2024-12-10 05:04:21.112148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.142 [2024-12-10 05:04:21.112155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:30.142 [2024-12-10 05:04:21.112174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:30.142 qpair failed and we were unable to recover it.
00:27:30.142 [2024-12-10 05:04:21.122331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.142 [2024-12-10 05:04:21.122396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.142 [2024-12-10 05:04:21.122409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.142 [2024-12-10 05:04:21.122416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.142 [2024-12-10 05:04:21.122423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.142 [2024-12-10 05:04:21.122438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.142 qpair failed and we were unable to recover it. 
00:27:30.142 [2024-12-10 05:04:21.132231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.142 [2024-12-10 05:04:21.132291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.142 [2024-12-10 05:04:21.132304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.142 [2024-12-10 05:04:21.132310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.142 [2024-12-10 05:04:21.132316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.142 [2024-12-10 05:04:21.132332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.142 qpair failed and we were unable to recover it. 
00:27:30.142 [2024-12-10 05:04:21.142267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.142 [2024-12-10 05:04:21.142321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.142 [2024-12-10 05:04:21.142334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.142 [2024-12-10 05:04:21.142341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.142 [2024-12-10 05:04:21.142347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.142 [2024-12-10 05:04:21.142362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.142 qpair failed and we were unable to recover it. 
00:27:30.142 [2024-12-10 05:04:21.152254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.142 [2024-12-10 05:04:21.152346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.142 [2024-12-10 05:04:21.152360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.142 [2024-12-10 05:04:21.152366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.142 [2024-12-10 05:04:21.152372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.142 [2024-12-10 05:04:21.152387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.142 qpair failed and we were unable to recover it. 
00:27:30.142 [2024-12-10 05:04:21.162209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.142 [2024-12-10 05:04:21.162267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.142 [2024-12-10 05:04:21.162282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.142 [2024-12-10 05:04:21.162289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.142 [2024-12-10 05:04:21.162297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.142 [2024-12-10 05:04:21.162312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.142 qpair failed and we were unable to recover it. 
00:27:30.142 [2024-12-10 05:04:21.172252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.142 [2024-12-10 05:04:21.172306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.142 [2024-12-10 05:04:21.172319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.142 [2024-12-10 05:04:21.172333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.142 [2024-12-10 05:04:21.172339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.142 [2024-12-10 05:04:21.172354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.142 qpair failed and we were unable to recover it. 
00:27:30.142 [2024-12-10 05:04:21.182321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.142 [2024-12-10 05:04:21.182382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.142 [2024-12-10 05:04:21.182397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.142 [2024-12-10 05:04:21.182405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.142 [2024-12-10 05:04:21.182412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.142 [2024-12-10 05:04:21.182427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.142 qpair failed and we were unable to recover it. 
00:27:30.142 [2024-12-10 05:04:21.192374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.142 [2024-12-10 05:04:21.192444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.142 [2024-12-10 05:04:21.192457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.142 [2024-12-10 05:04:21.192464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.142 [2024-12-10 05:04:21.192470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.142 [2024-12-10 05:04:21.192485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.142 qpair failed and we were unable to recover it. 
00:27:30.142 [2024-12-10 05:04:21.202403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.143 [2024-12-10 05:04:21.202467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.143 [2024-12-10 05:04:21.202479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.143 [2024-12-10 05:04:21.202486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.143 [2024-12-10 05:04:21.202493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.143 [2024-12-10 05:04:21.202507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.143 qpair failed and we were unable to recover it. 
00:27:30.143 [2024-12-10 05:04:21.212429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.143 [2024-12-10 05:04:21.212477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.143 [2024-12-10 05:04:21.212492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.143 [2024-12-10 05:04:21.212499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.143 [2024-12-10 05:04:21.212506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.143 [2024-12-10 05:04:21.212525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.143 qpair failed and we were unable to recover it. 
00:27:30.143 [2024-12-10 05:04:21.222470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.143 [2024-12-10 05:04:21.222547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.143 [2024-12-10 05:04:21.222560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.143 [2024-12-10 05:04:21.222567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.143 [2024-12-10 05:04:21.222573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.143 [2024-12-10 05:04:21.222588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.143 qpair failed and we were unable to recover it. 
00:27:30.143 [2024-12-10 05:04:21.232506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.143 [2024-12-10 05:04:21.232563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.143 [2024-12-10 05:04:21.232576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.143 [2024-12-10 05:04:21.232583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.143 [2024-12-10 05:04:21.232589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.143 [2024-12-10 05:04:21.232605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.143 qpair failed and we were unable to recover it. 
00:27:30.143 [2024-12-10 05:04:21.242455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.143 [2024-12-10 05:04:21.242505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.143 [2024-12-10 05:04:21.242518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.143 [2024-12-10 05:04:21.242525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.143 [2024-12-10 05:04:21.242531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.143 [2024-12-10 05:04:21.242546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.143 qpair failed and we were unable to recover it. 
00:27:30.143 [2024-12-10 05:04:21.252461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.143 [2024-12-10 05:04:21.252534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.143 [2024-12-10 05:04:21.252548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.143 [2024-12-10 05:04:21.252555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.143 [2024-12-10 05:04:21.252561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.143 [2024-12-10 05:04:21.252577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.143 qpair failed and we were unable to recover it. 
00:27:30.143 [2024-12-10 05:04:21.262581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.143 [2024-12-10 05:04:21.262658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.143 [2024-12-10 05:04:21.262673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.143 [2024-12-10 05:04:21.262680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.143 [2024-12-10 05:04:21.262687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.143 [2024-12-10 05:04:21.262702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.143 qpair failed and we were unable to recover it. 
00:27:30.404 [2024-12-10 05:04:21.272547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.404 [2024-12-10 05:04:21.272643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.404 [2024-12-10 05:04:21.272657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.404 [2024-12-10 05:04:21.272664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.404 [2024-12-10 05:04:21.272671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.404 [2024-12-10 05:04:21.272687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.404 qpair failed and we were unable to recover it. 
00:27:30.404 [2024-12-10 05:04:21.282636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.404 [2024-12-10 05:04:21.282694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.404 [2024-12-10 05:04:21.282707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.404 [2024-12-10 05:04:21.282714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.404 [2024-12-10 05:04:21.282721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.404 [2024-12-10 05:04:21.282737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.404 qpair failed and we were unable to recover it. 
00:27:30.404 [2024-12-10 05:04:21.292627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.404 [2024-12-10 05:04:21.292676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.404 [2024-12-10 05:04:21.292689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.404 [2024-12-10 05:04:21.292696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.404 [2024-12-10 05:04:21.292702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.404 [2024-12-10 05:04:21.292717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.404 qpair failed and we were unable to recover it. 
00:27:30.404 [2024-12-10 05:04:21.302642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.404 [2024-12-10 05:04:21.302697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.404 [2024-12-10 05:04:21.302715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.404 [2024-12-10 05:04:21.302722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.404 [2024-12-10 05:04:21.302728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.404 [2024-12-10 05:04:21.302743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.404 qpair failed and we were unable to recover it. 
00:27:30.404 [2024-12-10 05:04:21.312721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.404 [2024-12-10 05:04:21.312779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.404 [2024-12-10 05:04:21.312792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.404 [2024-12-10 05:04:21.312799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.404 [2024-12-10 05:04:21.312805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.404 [2024-12-10 05:04:21.312821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.404 qpair failed and we were unable to recover it. 
00:27:30.404 [2024-12-10 05:04:21.322804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.404 [2024-12-10 05:04:21.322865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.404 [2024-12-10 05:04:21.322880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.404 [2024-12-10 05:04:21.322887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.404 [2024-12-10 05:04:21.322893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.404 [2024-12-10 05:04:21.322909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.404 qpair failed and we were unable to recover it. 
00:27:30.404 [2024-12-10 05:04:21.332743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.404 [2024-12-10 05:04:21.332794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.404 [2024-12-10 05:04:21.332806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.404 [2024-12-10 05:04:21.332813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.404 [2024-12-10 05:04:21.332819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.404 [2024-12-10 05:04:21.332835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.404 qpair failed and we were unable to recover it. 
00:27:30.404 [2024-12-10 05:04:21.342854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.404 [2024-12-10 05:04:21.342909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.404 [2024-12-10 05:04:21.342921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.404 [2024-12-10 05:04:21.342928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.404 [2024-12-10 05:04:21.342935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.404 [2024-12-10 05:04:21.342953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.404 qpair failed and we were unable to recover it. 
00:27:30.404 [2024-12-10 05:04:21.352838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.404 [2024-12-10 05:04:21.352894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.404 [2024-12-10 05:04:21.352908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.404 [2024-12-10 05:04:21.352915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.404 [2024-12-10 05:04:21.352921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.404 [2024-12-10 05:04:21.352937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.404 qpair failed and we were unable to recover it. 
00:27:30.404 [2024-12-10 05:04:21.362875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.404 [2024-12-10 05:04:21.362928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.404 [2024-12-10 05:04:21.362941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.404 [2024-12-10 05:04:21.362948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.404 [2024-12-10 05:04:21.362954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.404 [2024-12-10 05:04:21.362969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.404 qpair failed and we were unable to recover it. 
00:27:30.404 [2024-12-10 05:04:21.372920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.404 [2024-12-10 05:04:21.372968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.404 [2024-12-10 05:04:21.372981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.404 [2024-12-10 05:04:21.372988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.404 [2024-12-10 05:04:21.372994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.404 [2024-12-10 05:04:21.373009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.404 qpair failed and we were unable to recover it. 
00:27:30.405 [2024-12-10 05:04:21.382923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.405 [2024-12-10 05:04:21.382979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.405 [2024-12-10 05:04:21.382993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.405 [2024-12-10 05:04:21.382999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.405 [2024-12-10 05:04:21.383006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.405 [2024-12-10 05:04:21.383021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.405 qpair failed and we were unable to recover it. 
00:27:30.405 [2024-12-10 05:04:21.392941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.405 [2024-12-10 05:04:21.392991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.405 [2024-12-10 05:04:21.393005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.405 [2024-12-10 05:04:21.393012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.405 [2024-12-10 05:04:21.393019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.405 [2024-12-10 05:04:21.393034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.405 qpair failed and we were unable to recover it. 
00:27:30.405 [2024-12-10 05:04:21.402981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.405 [2024-12-10 05:04:21.403038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.405 [2024-12-10 05:04:21.403051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.405 [2024-12-10 05:04:21.403058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.405 [2024-12-10 05:04:21.403064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.405 [2024-12-10 05:04:21.403079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.405 qpair failed and we were unable to recover it. 
00:27:30.405 [2024-12-10 05:04:21.412988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.405 [2024-12-10 05:04:21.413044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.405 [2024-12-10 05:04:21.413057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.405 [2024-12-10 05:04:21.413064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.405 [2024-12-10 05:04:21.413070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.405 [2024-12-10 05:04:21.413085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.405 qpair failed and we were unable to recover it. 
00:27:30.405 [2024-12-10 05:04:21.423037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.405 [2024-12-10 05:04:21.423093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.405 [2024-12-10 05:04:21.423106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.405 [2024-12-10 05:04:21.423113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.405 [2024-12-10 05:04:21.423119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.405 [2024-12-10 05:04:21.423135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.405 qpair failed and we were unable to recover it. 
00:27:30.405 [2024-12-10 05:04:21.433109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.405 [2024-12-10 05:04:21.433178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.405 [2024-12-10 05:04:21.433194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.405 [2024-12-10 05:04:21.433201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.405 [2024-12-10 05:04:21.433207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.405 [2024-12-10 05:04:21.433223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.405 qpair failed and we were unable to recover it. 
00:27:30.405 [2024-12-10 05:04:21.443083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.405 [2024-12-10 05:04:21.443131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.405 [2024-12-10 05:04:21.443143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.405 [2024-12-10 05:04:21.443150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.405 [2024-12-10 05:04:21.443156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.405 [2024-12-10 05:04:21.443175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.405 qpair failed and we were unable to recover it. 
00:27:30.405 [2024-12-10 05:04:21.453134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.405 [2024-12-10 05:04:21.453207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.405 [2024-12-10 05:04:21.453221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.405 [2024-12-10 05:04:21.453228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.405 [2024-12-10 05:04:21.453234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.405 [2024-12-10 05:04:21.453248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.405 qpair failed and we were unable to recover it. 
00:27:30.405 [2024-12-10 05:04:21.463162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.405 [2024-12-10 05:04:21.463231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.405 [2024-12-10 05:04:21.463244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.405 [2024-12-10 05:04:21.463251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.405 [2024-12-10 05:04:21.463257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.405 [2024-12-10 05:04:21.463272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.405 qpair failed and we were unable to recover it. 
00:27:30.405 [2024-12-10 05:04:21.473174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.405 [2024-12-10 05:04:21.473231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.405 [2024-12-10 05:04:21.473244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.405 [2024-12-10 05:04:21.473251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.405 [2024-12-10 05:04:21.473261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.405 [2024-12-10 05:04:21.473276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.405 qpair failed and we were unable to recover it. 
00:27:30.405 [2024-12-10 05:04:21.483211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.405 [2024-12-10 05:04:21.483269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.405 [2024-12-10 05:04:21.483281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.405 [2024-12-10 05:04:21.483289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.405 [2024-12-10 05:04:21.483295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.405 [2024-12-10 05:04:21.483310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.405 qpair failed and we were unable to recover it. 
00:27:30.405 [2024-12-10 05:04:21.493256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.405 [2024-12-10 05:04:21.493344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.406 [2024-12-10 05:04:21.493357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.406 [2024-12-10 05:04:21.493363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.406 [2024-12-10 05:04:21.493370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.406 [2024-12-10 05:04:21.493384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.406 qpair failed and we were unable to recover it. 
00:27:30.406 [2024-12-10 05:04:21.503268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.406 [2024-12-10 05:04:21.503326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.406 [2024-12-10 05:04:21.503339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.406 [2024-12-10 05:04:21.503346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.406 [2024-12-10 05:04:21.503352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.406 [2024-12-10 05:04:21.503368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.406 qpair failed and we were unable to recover it. 
00:27:30.406 [2024-12-10 05:04:21.513288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.406 [2024-12-10 05:04:21.513341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.406 [2024-12-10 05:04:21.513353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.406 [2024-12-10 05:04:21.513360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.406 [2024-12-10 05:04:21.513367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.406 [2024-12-10 05:04:21.513382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.406 qpair failed and we were unable to recover it. 
00:27:30.406 [2024-12-10 05:04:21.523349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.406 [2024-12-10 05:04:21.523404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.406 [2024-12-10 05:04:21.523417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.406 [2024-12-10 05:04:21.523423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.406 [2024-12-10 05:04:21.523430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.406 [2024-12-10 05:04:21.523444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.406 qpair failed and we were unable to recover it. 
00:27:30.406 [2024-12-10 05:04:21.533295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.406 [2024-12-10 05:04:21.533343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.406 [2024-12-10 05:04:21.533356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.406 [2024-12-10 05:04:21.533363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.406 [2024-12-10 05:04:21.533369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.406 [2024-12-10 05:04:21.533384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.406 qpair failed and we were unable to recover it. 
00:27:30.666 [2024-12-10 05:04:21.543398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.666 [2024-12-10 05:04:21.543456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.666 [2024-12-10 05:04:21.543471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.666 [2024-12-10 05:04:21.543479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.666 [2024-12-10 05:04:21.543485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.666 [2024-12-10 05:04:21.543501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.666 qpair failed and we were unable to recover it. 
00:27:30.666 [2024-12-10 05:04:21.553432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.666 [2024-12-10 05:04:21.553530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.666 [2024-12-10 05:04:21.553543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.666 [2024-12-10 05:04:21.553550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.666 [2024-12-10 05:04:21.553556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.667 [2024-12-10 05:04:21.553572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.667 qpair failed and we were unable to recover it. 
00:27:30.667 [2024-12-10 05:04:21.563435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.667 [2024-12-10 05:04:21.563502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.667 [2024-12-10 05:04:21.563519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.667 [2024-12-10 05:04:21.563526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.667 [2024-12-10 05:04:21.563532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.667 [2024-12-10 05:04:21.563547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.667 qpair failed and we were unable to recover it. 
00:27:30.667 [2024-12-10 05:04:21.573460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.667 [2024-12-10 05:04:21.573511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.667 [2024-12-10 05:04:21.573523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.667 [2024-12-10 05:04:21.573530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.667 [2024-12-10 05:04:21.573536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.667 [2024-12-10 05:04:21.573552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.667 qpair failed and we were unable to recover it. 
00:27:30.667 [2024-12-10 05:04:21.583505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.667 [2024-12-10 05:04:21.583563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.667 [2024-12-10 05:04:21.583577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.667 [2024-12-10 05:04:21.583584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.667 [2024-12-10 05:04:21.583590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.667 [2024-12-10 05:04:21.583605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.667 qpair failed and we were unable to recover it. 
00:27:30.667 [2024-12-10 05:04:21.593522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.667 [2024-12-10 05:04:21.593578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.667 [2024-12-10 05:04:21.593592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.667 [2024-12-10 05:04:21.593599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.667 [2024-12-10 05:04:21.593605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.667 [2024-12-10 05:04:21.593621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.667 qpair failed and we were unable to recover it. 
00:27:30.667 [2024-12-10 05:04:21.603589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.667 [2024-12-10 05:04:21.603646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.667 [2024-12-10 05:04:21.603659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.667 [2024-12-10 05:04:21.603669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.667 [2024-12-10 05:04:21.603675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.667 [2024-12-10 05:04:21.603690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.667 qpair failed and we were unable to recover it. 
00:27:30.667 [2024-12-10 05:04:21.613576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.667 [2024-12-10 05:04:21.613629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.667 [2024-12-10 05:04:21.613641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.667 [2024-12-10 05:04:21.613648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.667 [2024-12-10 05:04:21.613654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.667 [2024-12-10 05:04:21.613669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.667 qpair failed and we were unable to recover it. 
00:27:30.667 [2024-12-10 05:04:21.623614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.667 [2024-12-10 05:04:21.623708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.667 [2024-12-10 05:04:21.623722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.667 [2024-12-10 05:04:21.623729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.667 [2024-12-10 05:04:21.623735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.667 [2024-12-10 05:04:21.623751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.667 qpair failed and we were unable to recover it. 
00:27:30.667 [2024-12-10 05:04:21.633630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.667 [2024-12-10 05:04:21.633683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.667 [2024-12-10 05:04:21.633696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.667 [2024-12-10 05:04:21.633702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.667 [2024-12-10 05:04:21.633708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.667 [2024-12-10 05:04:21.633724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.667 qpair failed and we were unable to recover it. 
00:27:30.667 [2024-12-10 05:04:21.643667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.667 [2024-12-10 05:04:21.643757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.667 [2024-12-10 05:04:21.643770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.667 [2024-12-10 05:04:21.643777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.667 [2024-12-10 05:04:21.643783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.667 [2024-12-10 05:04:21.643798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.667 qpair failed and we were unable to recover it. 
00:27:30.667 [2024-12-10 05:04:21.653728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.667 [2024-12-10 05:04:21.653790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.667 [2024-12-10 05:04:21.653803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.667 [2024-12-10 05:04:21.653810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.667 [2024-12-10 05:04:21.653816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.667 [2024-12-10 05:04:21.653830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.667 qpair failed and we were unable to recover it. 
00:27:30.667 [2024-12-10 05:04:21.663665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.667 [2024-12-10 05:04:21.663730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.667 [2024-12-10 05:04:21.663743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.667 [2024-12-10 05:04:21.663750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.667 [2024-12-10 05:04:21.663756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.667 [2024-12-10 05:04:21.663770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.667 qpair failed and we were unable to recover it. 
00:27:30.667 [2024-12-10 05:04:21.673701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.667 [2024-12-10 05:04:21.673767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.667 [2024-12-10 05:04:21.673781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.667 [2024-12-10 05:04:21.673787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.667 [2024-12-10 05:04:21.673794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.667 [2024-12-10 05:04:21.673808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.667 qpair failed and we were unable to recover it. 
00:27:30.667 [2024-12-10 05:04:21.683793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.667 [2024-12-10 05:04:21.683856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.667 [2024-12-10 05:04:21.683869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.667 [2024-12-10 05:04:21.683876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.667 [2024-12-10 05:04:21.683882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.667 [2024-12-10 05:04:21.683897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.667 qpair failed and we were unable to recover it. 
00:27:30.667 [2024-12-10 05:04:21.693807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.667 [2024-12-10 05:04:21.693862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.667 [2024-12-10 05:04:21.693875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.667 [2024-12-10 05:04:21.693882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.667 [2024-12-10 05:04:21.693888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.668 [2024-12-10 05:04:21.693903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.668 [2024-12-10 05:04:21.703886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.668 [2024-12-10 05:04:21.703994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.668 [2024-12-10 05:04:21.704006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.668 [2024-12-10 05:04:21.704013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.668 [2024-12-10 05:04:21.704019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.668 [2024-12-10 05:04:21.704034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.668 [2024-12-10 05:04:21.713865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.668 [2024-12-10 05:04:21.713918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.668 [2024-12-10 05:04:21.713931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.668 [2024-12-10 05:04:21.713938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.668 [2024-12-10 05:04:21.713944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.668 [2024-12-10 05:04:21.713959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.668 [2024-12-10 05:04:21.723890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.668 [2024-12-10 05:04:21.723946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.668 [2024-12-10 05:04:21.723959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.668 [2024-12-10 05:04:21.723965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.668 [2024-12-10 05:04:21.723972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.668 [2024-12-10 05:04:21.723987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.668 [2024-12-10 05:04:21.733919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.668 [2024-12-10 05:04:21.734014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.668 [2024-12-10 05:04:21.734027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.668 [2024-12-10 05:04:21.734037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.668 [2024-12-10 05:04:21.734043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.668 [2024-12-10 05:04:21.734058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.668 [2024-12-10 05:04:21.743962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.668 [2024-12-10 05:04:21.744030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.668 [2024-12-10 05:04:21.744044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.668 [2024-12-10 05:04:21.744051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.668 [2024-12-10 05:04:21.744057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.668 [2024-12-10 05:04:21.744072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.668 [2024-12-10 05:04:21.753973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.668 [2024-12-10 05:04:21.754025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.668 [2024-12-10 05:04:21.754038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.668 [2024-12-10 05:04:21.754045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.668 [2024-12-10 05:04:21.754051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.668 [2024-12-10 05:04:21.754067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.668 [2024-12-10 05:04:21.764005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.668 [2024-12-10 05:04:21.764060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.668 [2024-12-10 05:04:21.764074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.668 [2024-12-10 05:04:21.764081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.668 [2024-12-10 05:04:21.764088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.668 [2024-12-10 05:04:21.764103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.668 [2024-12-10 05:04:21.774043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.668 [2024-12-10 05:04:21.774092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.668 [2024-12-10 05:04:21.774105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.668 [2024-12-10 05:04:21.774112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.668 [2024-12-10 05:04:21.774119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.668 [2024-12-10 05:04:21.774138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.668 [2024-12-10 05:04:21.784060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.668 [2024-12-10 05:04:21.784117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.668 [2024-12-10 05:04:21.784130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.668 [2024-12-10 05:04:21.784137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.668 [2024-12-10 05:04:21.784144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.668 [2024-12-10 05:04:21.784158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.668 [2024-12-10 05:04:21.794009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.668 [2024-12-10 05:04:21.794066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.668 [2024-12-10 05:04:21.794080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.668 [2024-12-10 05:04:21.794087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.668 [2024-12-10 05:04:21.794093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.668 [2024-12-10 05:04:21.794108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.929 [2024-12-10 05:04:21.804096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.929 [2024-12-10 05:04:21.804150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.929 [2024-12-10 05:04:21.804164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.929 [2024-12-10 05:04:21.804176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.929 [2024-12-10 05:04:21.804182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.929 [2024-12-10 05:04:21.804198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.929 qpair failed and we were unable to recover it. 
00:27:30.929 [2024-12-10 05:04:21.814122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.929 [2024-12-10 05:04:21.814178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.929 [2024-12-10 05:04:21.814191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.929 [2024-12-10 05:04:21.814199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.929 [2024-12-10 05:04:21.814206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.929 [2024-12-10 05:04:21.814222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.929 qpair failed and we were unable to recover it. 
00:27:30.929 [2024-12-10 05:04:21.824146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.929 [2024-12-10 05:04:21.824216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.929 [2024-12-10 05:04:21.824229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.929 [2024-12-10 05:04:21.824236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.929 [2024-12-10 05:04:21.824242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.929 [2024-12-10 05:04:21.824257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.929 qpair failed and we were unable to recover it. 
00:27:30.929 [2024-12-10 05:04:21.834195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.929 [2024-12-10 05:04:21.834248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.929 [2024-12-10 05:04:21.834261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.929 [2024-12-10 05:04:21.834268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.929 [2024-12-10 05:04:21.834274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.929 [2024-12-10 05:04:21.834290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.929 qpair failed and we were unable to recover it. 
00:27:30.929 [2024-12-10 05:04:21.844216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.929 [2024-12-10 05:04:21.844282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.929 [2024-12-10 05:04:21.844296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.929 [2024-12-10 05:04:21.844303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.929 [2024-12-10 05:04:21.844309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.929 [2024-12-10 05:04:21.844325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.929 qpair failed and we were unable to recover it. 
00:27:30.929 [2024-12-10 05:04:21.854245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.929 [2024-12-10 05:04:21.854301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.929 [2024-12-10 05:04:21.854314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.929 [2024-12-10 05:04:21.854321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.929 [2024-12-10 05:04:21.854327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.929 [2024-12-10 05:04:21.854342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.929 qpair failed and we were unable to recover it. 
00:27:30.929 [2024-12-10 05:04:21.864293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.929 [2024-12-10 05:04:21.864351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.929 [2024-12-10 05:04:21.864368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.929 [2024-12-10 05:04:21.864375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.929 [2024-12-10 05:04:21.864381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.929 [2024-12-10 05:04:21.864397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.929 qpair failed and we were unable to recover it. 
00:27:30.929 [2024-12-10 05:04:21.874304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.929 [2024-12-10 05:04:21.874363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.929 [2024-12-10 05:04:21.874377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.929 [2024-12-10 05:04:21.874384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.929 [2024-12-10 05:04:21.874391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.929 [2024-12-10 05:04:21.874407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.929 qpair failed and we were unable to recover it. 
00:27:30.929 [2024-12-10 05:04:21.884313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.929 [2024-12-10 05:04:21.884377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.929 [2024-12-10 05:04:21.884390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.929 [2024-12-10 05:04:21.884397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.929 [2024-12-10 05:04:21.884403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.929 [2024-12-10 05:04:21.884418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.929 qpair failed and we were unable to recover it. 
00:27:30.929 [2024-12-10 05:04:21.894368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.929 [2024-12-10 05:04:21.894441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.929 [2024-12-10 05:04:21.894454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.929 [2024-12-10 05:04:21.894461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.929 [2024-12-10 05:04:21.894467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.929 [2024-12-10 05:04:21.894482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.929 qpair failed and we were unable to recover it. 
00:27:30.929 [2024-12-10 05:04:21.904340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.929 [2024-12-10 05:04:21.904433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.929 [2024-12-10 05:04:21.904447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.929 [2024-12-10 05:04:21.904456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.929 [2024-12-10 05:04:21.904465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.929 [2024-12-10 05:04:21.904481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.929 qpair failed and we were unable to recover it. 
00:27:30.929 [2024-12-10 05:04:21.914470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.929 [2024-12-10 05:04:21.914536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.929 [2024-12-10 05:04:21.914551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.929 [2024-12-10 05:04:21.914559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.929 [2024-12-10 05:04:21.914565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.929 [2024-12-10 05:04:21.914580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.929 qpair failed and we were unable to recover it. 
00:27:30.929 [2024-12-10 05:04:21.924374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.930 [2024-12-10 05:04:21.924430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.930 [2024-12-10 05:04:21.924445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.930 [2024-12-10 05:04:21.924452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.930 [2024-12-10 05:04:21.924459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.930 [2024-12-10 05:04:21.924475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.930 qpair failed and we were unable to recover it. 
00:27:30.930 [2024-12-10 05:04:21.934401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.930 [2024-12-10 05:04:21.934453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.930 [2024-12-10 05:04:21.934469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.930 [2024-12-10 05:04:21.934477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.930 [2024-12-10 05:04:21.934485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.930 [2024-12-10 05:04:21.934502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.930 qpair failed and we were unable to recover it. 
00:27:30.930 [2024-12-10 05:04:21.944518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.930 [2024-12-10 05:04:21.944575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.930 [2024-12-10 05:04:21.944588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.930 [2024-12-10 05:04:21.944595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.930 [2024-12-10 05:04:21.944601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.930 [2024-12-10 05:04:21.944616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.930 qpair failed and we were unable to recover it. 
00:27:30.930 [2024-12-10 05:04:21.954535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.930 [2024-12-10 05:04:21.954599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.930 [2024-12-10 05:04:21.954612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.930 [2024-12-10 05:04:21.954618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.930 [2024-12-10 05:04:21.954625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.930 [2024-12-10 05:04:21.954640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.930 qpair failed and we were unable to recover it. 
00:27:30.930 [2024-12-10 05:04:21.964574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.930 [2024-12-10 05:04:21.964651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.930 [2024-12-10 05:04:21.964664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.930 [2024-12-10 05:04:21.964671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.930 [2024-12-10 05:04:21.964677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.930 [2024-12-10 05:04:21.964691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.930 qpair failed and we were unable to recover it. 
00:27:30.930 [2024-12-10 05:04:21.974590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.930 [2024-12-10 05:04:21.974657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.930 [2024-12-10 05:04:21.974670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.930 [2024-12-10 05:04:21.974677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.930 [2024-12-10 05:04:21.974683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.930 [2024-12-10 05:04:21.974698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.930 qpair failed and we were unable to recover it. 
00:27:30.930 [2024-12-10 05:04:21.984617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.930 [2024-12-10 05:04:21.984695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.930 [2024-12-10 05:04:21.984709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.930 [2024-12-10 05:04:21.984715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.930 [2024-12-10 05:04:21.984721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.930 [2024-12-10 05:04:21.984736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.930 qpair failed and we were unable to recover it. 
00:27:30.930 [2024-12-10 05:04:21.994646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.930 [2024-12-10 05:04:21.994696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.930 [2024-12-10 05:04:21.994712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.930 [2024-12-10 05:04:21.994719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.930 [2024-12-10 05:04:21.994725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.930 [2024-12-10 05:04:21.994740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.930 qpair failed and we were unable to recover it. 
00:27:30.930 [2024-12-10 05:04:22.004677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.930 [2024-12-10 05:04:22.004776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.930 [2024-12-10 05:04:22.004789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.930 [2024-12-10 05:04:22.004796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.930 [2024-12-10 05:04:22.004802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.930 [2024-12-10 05:04:22.004817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.930 qpair failed and we were unable to recover it. 
00:27:30.930 [2024-12-10 05:04:22.014705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.930 [2024-12-10 05:04:22.014756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.930 [2024-12-10 05:04:22.014769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.930 [2024-12-10 05:04:22.014776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.930 [2024-12-10 05:04:22.014782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:30.930 [2024-12-10 05:04:22.014797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.930 qpair failed and we were unable to recover it. 
00:27:30.930 [2024-12-10 05:04:22.024719] through 00:27:31.454 [2024-12-10 05:04:22.345723] (the same failure sequence repeated 33 more times at ~10 ms intervals: ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1; nvme_fabric.c: 599/610: Connect command failed, rc -5, then completed with error: sct 1, sc 130; nvme_tcp.c:2348/2125: Failed to poll NVMe-oF Fabric CONNECT command, Failed to connect tqpair=0x7f58e0000b90; nvme_qpair.c: 812: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2; qpair failed and we were unable to recover it.)
00:27:31.454 [2024-12-10 05:04:22.355588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.454 [2024-12-10 05:04:22.355649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.454 [2024-12-10 05:04:22.355662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.454 [2024-12-10 05:04:22.355670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.454 [2024-12-10 05:04:22.355676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.454 [2024-12-10 05:04:22.355692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.454 qpair failed and we were unable to recover it. 
00:27:31.454 [2024-12-10 05:04:22.365675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.454 [2024-12-10 05:04:22.365731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.454 [2024-12-10 05:04:22.365744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.454 [2024-12-10 05:04:22.365751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.454 [2024-12-10 05:04:22.365757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.454 [2024-12-10 05:04:22.365772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.454 qpair failed and we were unable to recover it. 
00:27:31.454 [2024-12-10 05:04:22.375643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.454 [2024-12-10 05:04:22.375742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.454 [2024-12-10 05:04:22.375756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.454 [2024-12-10 05:04:22.375762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.454 [2024-12-10 05:04:22.375768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.454 [2024-12-10 05:04:22.375783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.454 qpair failed and we were unable to recover it. 
00:27:31.454 [2024-12-10 05:04:22.385754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.454 [2024-12-10 05:04:22.385811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.454 [2024-12-10 05:04:22.385825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.454 [2024-12-10 05:04:22.385832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.454 [2024-12-10 05:04:22.385838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.454 [2024-12-10 05:04:22.385853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.454 qpair failed and we were unable to recover it. 
00:27:31.454 [2024-12-10 05:04:22.395822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.454 [2024-12-10 05:04:22.395880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.454 [2024-12-10 05:04:22.395893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.454 [2024-12-10 05:04:22.395899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.454 [2024-12-10 05:04:22.395906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.454 [2024-12-10 05:04:22.395921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.454 qpair failed and we were unable to recover it. 
00:27:31.454 [2024-12-10 05:04:22.405819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.454 [2024-12-10 05:04:22.405909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.454 [2024-12-10 05:04:22.405922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.454 [2024-12-10 05:04:22.405929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.454 [2024-12-10 05:04:22.405935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.454 [2024-12-10 05:04:22.405951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.454 qpair failed and we were unable to recover it. 
00:27:31.454 [2024-12-10 05:04:22.415814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.454 [2024-12-10 05:04:22.415914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.454 [2024-12-10 05:04:22.415927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.454 [2024-12-10 05:04:22.415934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.454 [2024-12-10 05:04:22.415941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.454 [2024-12-10 05:04:22.415957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.454 qpair failed and we were unable to recover it. 
00:27:31.454 [2024-12-10 05:04:22.425826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.454 [2024-12-10 05:04:22.425896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.454 [2024-12-10 05:04:22.425913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.454 [2024-12-10 05:04:22.425920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.454 [2024-12-10 05:04:22.425927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.454 [2024-12-10 05:04:22.425942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.454 qpair failed and we were unable to recover it. 
00:27:31.454 [2024-12-10 05:04:22.435910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.454 [2024-12-10 05:04:22.435975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.454 [2024-12-10 05:04:22.435988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.454 [2024-12-10 05:04:22.435995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.454 [2024-12-10 05:04:22.436001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.454 [2024-12-10 05:04:22.436016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.454 qpair failed and we were unable to recover it. 
00:27:31.454 [2024-12-10 05:04:22.445953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.454 [2024-12-10 05:04:22.446007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.454 [2024-12-10 05:04:22.446020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.454 [2024-12-10 05:04:22.446027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.454 [2024-12-10 05:04:22.446033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.454 [2024-12-10 05:04:22.446049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.454 qpair failed and we were unable to recover it. 
00:27:31.454 [2024-12-10 05:04:22.455967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.454 [2024-12-10 05:04:22.456036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.454 [2024-12-10 05:04:22.456049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.454 [2024-12-10 05:04:22.456056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.454 [2024-12-10 05:04:22.456062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.454 [2024-12-10 05:04:22.456077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.454 qpair failed and we were unable to recover it. 
00:27:31.454 [2024-12-10 05:04:22.465985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.454 [2024-12-10 05:04:22.466040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.454 [2024-12-10 05:04:22.466053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.454 [2024-12-10 05:04:22.466060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.454 [2024-12-10 05:04:22.466069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.454 [2024-12-10 05:04:22.466084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.454 qpair failed and we were unable to recover it. 
00:27:31.454 [2024-12-10 05:04:22.476008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.455 [2024-12-10 05:04:22.476063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.455 [2024-12-10 05:04:22.476075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.455 [2024-12-10 05:04:22.476082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.455 [2024-12-10 05:04:22.476089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.455 [2024-12-10 05:04:22.476105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.455 qpair failed and we were unable to recover it. 
00:27:31.455 [2024-12-10 05:04:22.486050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.455 [2024-12-10 05:04:22.486108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.455 [2024-12-10 05:04:22.486122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.455 [2024-12-10 05:04:22.486129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.455 [2024-12-10 05:04:22.486136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.455 [2024-12-10 05:04:22.486151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.455 qpair failed and we were unable to recover it. 
00:27:31.455 [2024-12-10 05:04:22.496070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.455 [2024-12-10 05:04:22.496128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.455 [2024-12-10 05:04:22.496141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.455 [2024-12-10 05:04:22.496148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.455 [2024-12-10 05:04:22.496155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.455 [2024-12-10 05:04:22.496182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.455 qpair failed and we were unable to recover it. 
00:27:31.455 [2024-12-10 05:04:22.506104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.455 [2024-12-10 05:04:22.506161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.455 [2024-12-10 05:04:22.506180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.455 [2024-12-10 05:04:22.506187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.455 [2024-12-10 05:04:22.506194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.455 [2024-12-10 05:04:22.506208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.455 qpair failed and we were unable to recover it. 
00:27:31.455 [2024-12-10 05:04:22.516136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.455 [2024-12-10 05:04:22.516201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.455 [2024-12-10 05:04:22.516215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.455 [2024-12-10 05:04:22.516222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.455 [2024-12-10 05:04:22.516228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.455 [2024-12-10 05:04:22.516243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.455 qpair failed and we were unable to recover it. 
00:27:31.455 [2024-12-10 05:04:22.526206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.455 [2024-12-10 05:04:22.526286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.455 [2024-12-10 05:04:22.526299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.455 [2024-12-10 05:04:22.526306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.455 [2024-12-10 05:04:22.526313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.455 [2024-12-10 05:04:22.526329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.455 qpair failed and we were unable to recover it. 
00:27:31.455 [2024-12-10 05:04:22.536213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.455 [2024-12-10 05:04:22.536264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.455 [2024-12-10 05:04:22.536276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.455 [2024-12-10 05:04:22.536283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.455 [2024-12-10 05:04:22.536289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.455 [2024-12-10 05:04:22.536304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.455 qpair failed and we were unable to recover it. 
00:27:31.455 [2024-12-10 05:04:22.546253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.455 [2024-12-10 05:04:22.546359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.455 [2024-12-10 05:04:22.546376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.455 [2024-12-10 05:04:22.546383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.455 [2024-12-10 05:04:22.546390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.455 [2024-12-10 05:04:22.546406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.455 qpair failed and we were unable to recover it. 
00:27:31.455 [2024-12-10 05:04:22.556252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.455 [2024-12-10 05:04:22.556309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.455 [2024-12-10 05:04:22.556326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.455 [2024-12-10 05:04:22.556333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.455 [2024-12-10 05:04:22.556340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.455 [2024-12-10 05:04:22.556355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.455 qpair failed and we were unable to recover it. 
00:27:31.455 [2024-12-10 05:04:22.566331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.455 [2024-12-10 05:04:22.566387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.455 [2024-12-10 05:04:22.566401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.455 [2024-12-10 05:04:22.566408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.455 [2024-12-10 05:04:22.566414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.455 [2024-12-10 05:04:22.566429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.455 qpair failed and we were unable to recover it. 
00:27:31.455 [2024-12-10 05:04:22.576227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.455 [2024-12-10 05:04:22.576280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.455 [2024-12-10 05:04:22.576293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.455 [2024-12-10 05:04:22.576300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.455 [2024-12-10 05:04:22.576306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.455 [2024-12-10 05:04:22.576322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.455 qpair failed and we were unable to recover it. 
00:27:31.716 [2024-12-10 05:04:22.586358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.716 [2024-12-10 05:04:22.586418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.716 [2024-12-10 05:04:22.586431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.716 [2024-12-10 05:04:22.586438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.716 [2024-12-10 05:04:22.586445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.716 [2024-12-10 05:04:22.586460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.716 qpair failed and we were unable to recover it. 
00:27:31.716 [2024-12-10 05:04:22.596384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.716 [2024-12-10 05:04:22.596452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.716 [2024-12-10 05:04:22.596467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.716 [2024-12-10 05:04:22.596475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.716 [2024-12-10 05:04:22.596484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.717 [2024-12-10 05:04:22.596499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.717 qpair failed and we were unable to recover it. 
00:27:31.717 [2024-12-10 05:04:22.606394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.717 [2024-12-10 05:04:22.606455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.717 [2024-12-10 05:04:22.606468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.717 [2024-12-10 05:04:22.606475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.717 [2024-12-10 05:04:22.606481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.717 [2024-12-10 05:04:22.606497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.717 qpair failed and we were unable to recover it. 
00:27:31.717 [2024-12-10 05:04:22.616407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.717 [2024-12-10 05:04:22.616462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.717 [2024-12-10 05:04:22.616475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.717 [2024-12-10 05:04:22.616482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.717 [2024-12-10 05:04:22.616488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.717 [2024-12-10 05:04:22.616504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.717 qpair failed and we were unable to recover it. 
00:27:31.717 [2024-12-10 05:04:22.626456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.717 [2024-12-10 05:04:22.626513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.717 [2024-12-10 05:04:22.626526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.717 [2024-12-10 05:04:22.626533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.717 [2024-12-10 05:04:22.626539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.717 [2024-12-10 05:04:22.626554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.717 qpair failed and we were unable to recover it. 
00:27:31.717 [2024-12-10 05:04:22.636502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.717 [2024-12-10 05:04:22.636563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.717 [2024-12-10 05:04:22.636576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.717 [2024-12-10 05:04:22.636583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.717 [2024-12-10 05:04:22.636589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.717 [2024-12-10 05:04:22.636605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.717 qpair failed and we were unable to recover it. 
00:27:31.980 [2024-12-10 05:04:22.987488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.980 [2024-12-10 05:04:22.987548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.980 [2024-12-10 05:04:22.987565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.980 [2024-12-10 05:04:22.987572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.980 [2024-12-10 05:04:22.987578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.980 [2024-12-10 05:04:22.987594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.980 qpair failed and we were unable to recover it. 
00:27:31.980 [2024-12-10 05:04:22.997520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.981 [2024-12-10 05:04:22.997575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.981 [2024-12-10 05:04:22.997589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.981 [2024-12-10 05:04:22.997596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.981 [2024-12-10 05:04:22.997603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.981 [2024-12-10 05:04:22.997619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.981 qpair failed and we were unable to recover it. 
00:27:31.981 [2024-12-10 05:04:23.007501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.981 [2024-12-10 05:04:23.007598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.981 [2024-12-10 05:04:23.007612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.981 [2024-12-10 05:04:23.007619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.981 [2024-12-10 05:04:23.007625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.981 [2024-12-10 05:04:23.007642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.981 qpair failed and we were unable to recover it. 
00:27:31.981 [2024-12-10 05:04:23.017572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.981 [2024-12-10 05:04:23.017655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.981 [2024-12-10 05:04:23.017669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.981 [2024-12-10 05:04:23.017675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.981 [2024-12-10 05:04:23.017681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.981 [2024-12-10 05:04:23.017696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.981 qpair failed and we were unable to recover it. 
00:27:31.981 [2024-12-10 05:04:23.027608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.981 [2024-12-10 05:04:23.027667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.981 [2024-12-10 05:04:23.027679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.981 [2024-12-10 05:04:23.027686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.981 [2024-12-10 05:04:23.027695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.981 [2024-12-10 05:04:23.027711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.981 qpair failed and we were unable to recover it. 
00:27:31.981 [2024-12-10 05:04:23.037638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.981 [2024-12-10 05:04:23.037696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.981 [2024-12-10 05:04:23.037709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.981 [2024-12-10 05:04:23.037716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.981 [2024-12-10 05:04:23.037722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.981 [2024-12-10 05:04:23.037737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.981 qpair failed and we were unable to recover it. 
00:27:31.981 [2024-12-10 05:04:23.047693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.981 [2024-12-10 05:04:23.047746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.981 [2024-12-10 05:04:23.047759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.981 [2024-12-10 05:04:23.047766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.981 [2024-12-10 05:04:23.047773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.981 [2024-12-10 05:04:23.047788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.981 qpair failed and we were unable to recover it. 
00:27:31.981 [2024-12-10 05:04:23.057688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.981 [2024-12-10 05:04:23.057741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.981 [2024-12-10 05:04:23.057753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.981 [2024-12-10 05:04:23.057760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.981 [2024-12-10 05:04:23.057766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.981 [2024-12-10 05:04:23.057781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.981 qpair failed and we were unable to recover it. 
00:27:31.981 [2024-12-10 05:04:23.067732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.981 [2024-12-10 05:04:23.067791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.981 [2024-12-10 05:04:23.067804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.981 [2024-12-10 05:04:23.067811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.981 [2024-12-10 05:04:23.067817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.981 [2024-12-10 05:04:23.067832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.981 qpair failed and we were unable to recover it. 
00:27:31.981 [2024-12-10 05:04:23.077804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.981 [2024-12-10 05:04:23.077874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.981 [2024-12-10 05:04:23.077888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.981 [2024-12-10 05:04:23.077894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.981 [2024-12-10 05:04:23.077900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.981 [2024-12-10 05:04:23.077915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.981 qpair failed and we were unable to recover it. 
00:27:31.981 [2024-12-10 05:04:23.087779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.981 [2024-12-10 05:04:23.087830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.981 [2024-12-10 05:04:23.087846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.981 [2024-12-10 05:04:23.087853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.981 [2024-12-10 05:04:23.087860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.981 [2024-12-10 05:04:23.087875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.981 qpair failed and we were unable to recover it. 
00:27:31.981 [2024-12-10 05:04:23.097811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.981 [2024-12-10 05:04:23.097863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.981 [2024-12-10 05:04:23.097876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.981 [2024-12-10 05:04:23.097883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.981 [2024-12-10 05:04:23.097889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.981 [2024-12-10 05:04:23.097905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.981 qpair failed and we were unable to recover it. 
00:27:31.981 [2024-12-10 05:04:23.107884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.981 [2024-12-10 05:04:23.107938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.981 [2024-12-10 05:04:23.107952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.981 [2024-12-10 05:04:23.107959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.981 [2024-12-10 05:04:23.107965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:31.981 [2024-12-10 05:04:23.107980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.981 qpair failed and we were unable to recover it. 
00:27:32.242 [2024-12-10 05:04:23.117862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.242 [2024-12-10 05:04:23.117916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.242 [2024-12-10 05:04:23.117933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.242 [2024-12-10 05:04:23.117940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.242 [2024-12-10 05:04:23.117947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.242 [2024-12-10 05:04:23.117962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.242 qpair failed and we were unable to recover it. 
00:27:32.242 [2024-12-10 05:04:23.127982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.242 [2024-12-10 05:04:23.128043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.242 [2024-12-10 05:04:23.128057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.242 [2024-12-10 05:04:23.128064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.242 [2024-12-10 05:04:23.128071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.242 [2024-12-10 05:04:23.128086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.242 qpair failed and we were unable to recover it. 
00:27:32.242 [2024-12-10 05:04:23.137965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.242 [2024-12-10 05:04:23.138022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.242 [2024-12-10 05:04:23.138035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.242 [2024-12-10 05:04:23.138042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.242 [2024-12-10 05:04:23.138048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.242 [2024-12-10 05:04:23.138064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.242 qpair failed and we were unable to recover it. 
00:27:32.242 [2024-12-10 05:04:23.147988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.242 [2024-12-10 05:04:23.148045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.242 [2024-12-10 05:04:23.148059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.242 [2024-12-10 05:04:23.148066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.242 [2024-12-10 05:04:23.148072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.242 [2024-12-10 05:04:23.148087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.242 qpair failed and we were unable to recover it. 
00:27:32.242 [2024-12-10 05:04:23.158045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.242 [2024-12-10 05:04:23.158126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.242 [2024-12-10 05:04:23.158139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.242 [2024-12-10 05:04:23.158146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.242 [2024-12-10 05:04:23.158155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.242 [2024-12-10 05:04:23.158174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.242 qpair failed and we were unable to recover it. 
00:27:32.242 [2024-12-10 05:04:23.168039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.242 [2024-12-10 05:04:23.168098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.242 [2024-12-10 05:04:23.168111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.242 [2024-12-10 05:04:23.168119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.242 [2024-12-10 05:04:23.168125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.242 [2024-12-10 05:04:23.168140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.242 qpair failed and we were unable to recover it. 
00:27:32.242 [2024-12-10 05:04:23.178089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.242 [2024-12-10 05:04:23.178147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.242 [2024-12-10 05:04:23.178160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.242 [2024-12-10 05:04:23.178171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.242 [2024-12-10 05:04:23.178178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.242 [2024-12-10 05:04:23.178193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.242 qpair failed and we were unable to recover it. 
00:27:32.242 [2024-12-10 05:04:23.188075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.242 [2024-12-10 05:04:23.188134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.242 [2024-12-10 05:04:23.188147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.242 [2024-12-10 05:04:23.188154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.242 [2024-12-10 05:04:23.188161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.242 [2024-12-10 05:04:23.188179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.242 qpair failed and we were unable to recover it. 
00:27:32.242 [2024-12-10 05:04:23.198110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.242 [2024-12-10 05:04:23.198172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.242 [2024-12-10 05:04:23.198186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.242 [2024-12-10 05:04:23.198193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.242 [2024-12-10 05:04:23.198199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.242 [2024-12-10 05:04:23.198215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.242 qpair failed and we were unable to recover it. 
00:27:32.242 [2024-12-10 05:04:23.208128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.243 [2024-12-10 05:04:23.208185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.243 [2024-12-10 05:04:23.208199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.243 [2024-12-10 05:04:23.208205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.243 [2024-12-10 05:04:23.208212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.243 [2024-12-10 05:04:23.208227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.243 qpair failed and we were unable to recover it. 
00:27:32.243 [2024-12-10 05:04:23.218212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.243 [2024-12-10 05:04:23.218273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.243 [2024-12-10 05:04:23.218286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.243 [2024-12-10 05:04:23.218293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.243 [2024-12-10 05:04:23.218299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.243 [2024-12-10 05:04:23.218315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.243 qpair failed and we were unable to recover it. 
00:27:32.243 [2024-12-10 05:04:23.228241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.243 [2024-12-10 05:04:23.228298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.243 [2024-12-10 05:04:23.228311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.243 [2024-12-10 05:04:23.228318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.243 [2024-12-10 05:04:23.228324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.243 [2024-12-10 05:04:23.228338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.243 qpair failed and we were unable to recover it. 
00:27:32.243 [2024-12-10 05:04:23.238250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.243 [2024-12-10 05:04:23.238307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.243 [2024-12-10 05:04:23.238320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.243 [2024-12-10 05:04:23.238327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.243 [2024-12-10 05:04:23.238333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.243 [2024-12-10 05:04:23.238348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.243 qpair failed and we were unable to recover it. 
00:27:32.243 [2024-12-10 05:04:23.248264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.243 [2024-12-10 05:04:23.248314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.243 [2024-12-10 05:04:23.248330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.243 [2024-12-10 05:04:23.248337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.243 [2024-12-10 05:04:23.248343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.243 [2024-12-10 05:04:23.248359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.243 qpair failed and we were unable to recover it. 
00:27:32.243 [2024-12-10 05:04:23.258312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.243 [2024-12-10 05:04:23.258372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.243 [2024-12-10 05:04:23.258385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.243 [2024-12-10 05:04:23.258392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.243 [2024-12-10 05:04:23.258398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.243 [2024-12-10 05:04:23.258413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.243 qpair failed and we were unable to recover it. 
00:27:32.506 [... identical CONNECT failure sequence (Unknown controller ID 0x1 / rc -5 / sct 1, sc 130 / CQ transport error -6 on qpair id 2 / "qpair failed and we were unable to recover it.") repeated every ~10 ms, 34 more times, from 2024-12-10 05:04:23.268 through 05:04:23.599 (elapsed 00:27:32.243-00:27:32.506) ...]
00:27:32.506 [2024-12-10 05:04:23.609216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.506 [2024-12-10 05:04:23.609267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.506 [2024-12-10 05:04:23.609280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.506 [2024-12-10 05:04:23.609287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.506 [2024-12-10 05:04:23.609293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.506 [2024-12-10 05:04:23.609308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.506 qpair failed and we were unable to recover it. 
00:27:32.506 [2024-12-10 05:04:23.619253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.506 [2024-12-10 05:04:23.619302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.506 [2024-12-10 05:04:23.619315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.506 [2024-12-10 05:04:23.619321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.506 [2024-12-10 05:04:23.619327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.506 [2024-12-10 05:04:23.619343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.506 qpair failed and we were unable to recover it. 
00:27:32.506 [2024-12-10 05:04:23.629329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.506 [2024-12-10 05:04:23.629414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.506 [2024-12-10 05:04:23.629427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.506 [2024-12-10 05:04:23.629434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.506 [2024-12-10 05:04:23.629440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.506 [2024-12-10 05:04:23.629455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.506 qpair failed and we were unable to recover it. 
00:27:32.766 [2024-12-10 05:04:23.639334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.766 [2024-12-10 05:04:23.639394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.766 [2024-12-10 05:04:23.639407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.766 [2024-12-10 05:04:23.639414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.766 [2024-12-10 05:04:23.639421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.766 [2024-12-10 05:04:23.639435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.766 qpair failed and we were unable to recover it. 
00:27:32.766 [2024-12-10 05:04:23.649340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.766 [2024-12-10 05:04:23.649409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.766 [2024-12-10 05:04:23.649421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.766 [2024-12-10 05:04:23.649428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.766 [2024-12-10 05:04:23.649434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.766 [2024-12-10 05:04:23.649449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.766 qpair failed and we were unable to recover it. 
00:27:32.766 [2024-12-10 05:04:23.659376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.766 [2024-12-10 05:04:23.659432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.766 [2024-12-10 05:04:23.659444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.766 [2024-12-10 05:04:23.659451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.766 [2024-12-10 05:04:23.659457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.766 [2024-12-10 05:04:23.659472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.766 qpair failed and we were unable to recover it. 
00:27:32.766 [2024-12-10 05:04:23.669451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.766 [2024-12-10 05:04:23.669509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.767 [2024-12-10 05:04:23.669522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.767 [2024-12-10 05:04:23.669530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.767 [2024-12-10 05:04:23.669536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.767 [2024-12-10 05:04:23.669551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.767 qpair failed and we were unable to recover it. 
00:27:32.767 [2024-12-10 05:04:23.679419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.767 [2024-12-10 05:04:23.679471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.767 [2024-12-10 05:04:23.679487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.767 [2024-12-10 05:04:23.679493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.767 [2024-12-10 05:04:23.679500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.767 [2024-12-10 05:04:23.679515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.767 qpair failed and we were unable to recover it. 
00:27:32.767 [2024-12-10 05:04:23.689446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.767 [2024-12-10 05:04:23.689506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.767 [2024-12-10 05:04:23.689519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.767 [2024-12-10 05:04:23.689526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.767 [2024-12-10 05:04:23.689533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.767 [2024-12-10 05:04:23.689548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.767 qpair failed and we were unable to recover it. 
00:27:32.767 [2024-12-10 05:04:23.699476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.767 [2024-12-10 05:04:23.699533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.767 [2024-12-10 05:04:23.699546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.767 [2024-12-10 05:04:23.699553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.767 [2024-12-10 05:04:23.699560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.767 [2024-12-10 05:04:23.699575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.767 qpair failed and we were unable to recover it. 
00:27:32.767 [2024-12-10 05:04:23.709522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.767 [2024-12-10 05:04:23.709577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.767 [2024-12-10 05:04:23.709590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.767 [2024-12-10 05:04:23.709597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.767 [2024-12-10 05:04:23.709603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.767 [2024-12-10 05:04:23.709618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.767 qpair failed and we were unable to recover it. 
00:27:32.767 [2024-12-10 05:04:23.719532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.767 [2024-12-10 05:04:23.719585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.767 [2024-12-10 05:04:23.719598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.767 [2024-12-10 05:04:23.719605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.767 [2024-12-10 05:04:23.719614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.767 [2024-12-10 05:04:23.719630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.767 qpair failed and we were unable to recover it. 
00:27:32.767 [2024-12-10 05:04:23.729552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.767 [2024-12-10 05:04:23.729605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.767 [2024-12-10 05:04:23.729619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.767 [2024-12-10 05:04:23.729626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.767 [2024-12-10 05:04:23.729633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.767 [2024-12-10 05:04:23.729649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.767 qpair failed and we were unable to recover it. 
00:27:32.767 [2024-12-10 05:04:23.739624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.767 [2024-12-10 05:04:23.739685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.767 [2024-12-10 05:04:23.739699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.767 [2024-12-10 05:04:23.739706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.767 [2024-12-10 05:04:23.739713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.767 [2024-12-10 05:04:23.739728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.767 qpair failed and we were unable to recover it. 
00:27:32.767 [2024-12-10 05:04:23.749630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.767 [2024-12-10 05:04:23.749685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.767 [2024-12-10 05:04:23.749698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.767 [2024-12-10 05:04:23.749705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.767 [2024-12-10 05:04:23.749711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.767 [2024-12-10 05:04:23.749727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.767 qpair failed and we were unable to recover it. 
00:27:32.767 [2024-12-10 05:04:23.759642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.767 [2024-12-10 05:04:23.759700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.767 [2024-12-10 05:04:23.759714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.767 [2024-12-10 05:04:23.759721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.767 [2024-12-10 05:04:23.759727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.767 [2024-12-10 05:04:23.759742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.767 qpair failed and we were unable to recover it. 
00:27:32.767 [2024-12-10 05:04:23.769672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.767 [2024-12-10 05:04:23.769731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.767 [2024-12-10 05:04:23.769744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.767 [2024-12-10 05:04:23.769751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.767 [2024-12-10 05:04:23.769757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.767 [2024-12-10 05:04:23.769773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.767 qpair failed and we were unable to recover it. 
00:27:32.767 [2024-12-10 05:04:23.779614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.767 [2024-12-10 05:04:23.779664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.767 [2024-12-10 05:04:23.779677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.767 [2024-12-10 05:04:23.779683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.767 [2024-12-10 05:04:23.779689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.767 [2024-12-10 05:04:23.779705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.767 qpair failed and we were unable to recover it. 
00:27:32.767 [2024-12-10 05:04:23.789778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.767 [2024-12-10 05:04:23.789833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.767 [2024-12-10 05:04:23.789846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.767 [2024-12-10 05:04:23.789852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.767 [2024-12-10 05:04:23.789858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.767 [2024-12-10 05:04:23.789873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.767 qpair failed and we were unable to recover it. 
00:27:32.767 [2024-12-10 05:04:23.799732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.767 [2024-12-10 05:04:23.799794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.767 [2024-12-10 05:04:23.799808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.767 [2024-12-10 05:04:23.799815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.768 [2024-12-10 05:04:23.799822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.768 [2024-12-10 05:04:23.799837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.768 qpair failed and we were unable to recover it. 
00:27:32.768 [2024-12-10 05:04:23.809824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.768 [2024-12-10 05:04:23.809920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.768 [2024-12-10 05:04:23.809937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.768 [2024-12-10 05:04:23.809944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.768 [2024-12-10 05:04:23.809951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.768 [2024-12-10 05:04:23.809966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.768 qpair failed and we were unable to recover it. 
00:27:32.768 [2024-12-10 05:04:23.819750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.768 [2024-12-10 05:04:23.819799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.768 [2024-12-10 05:04:23.819814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.768 [2024-12-10 05:04:23.819821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.768 [2024-12-10 05:04:23.819827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.768 [2024-12-10 05:04:23.819843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.768 qpair failed and we were unable to recover it. 
00:27:32.768 [2024-12-10 05:04:23.829852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.768 [2024-12-10 05:04:23.829933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.768 [2024-12-10 05:04:23.829946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.768 [2024-12-10 05:04:23.829953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.768 [2024-12-10 05:04:23.829959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.768 [2024-12-10 05:04:23.829974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.768 qpair failed and we were unable to recover it. 
00:27:32.768 [2024-12-10 05:04:23.839905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.768 [2024-12-10 05:04:23.839960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.768 [2024-12-10 05:04:23.839974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.768 [2024-12-10 05:04:23.839981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.768 [2024-12-10 05:04:23.839986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.768 [2024-12-10 05:04:23.840002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.768 qpair failed and we were unable to recover it. 
00:27:32.768 [2024-12-10 05:04:23.849908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.768 [2024-12-10 05:04:23.849963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.768 [2024-12-10 05:04:23.849976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.768 [2024-12-10 05:04:23.849986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.768 [2024-12-10 05:04:23.849993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.768 [2024-12-10 05:04:23.850008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.768 qpair failed and we were unable to recover it. 
00:27:32.768 [2024-12-10 05:04:23.859850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.768 [2024-12-10 05:04:23.859905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.768 [2024-12-10 05:04:23.859919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.768 [2024-12-10 05:04:23.859926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.768 [2024-12-10 05:04:23.859933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.768 [2024-12-10 05:04:23.859948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.768 qpair failed and we were unable to recover it. 
00:27:32.768 [2024-12-10 05:04:23.869903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.768 [2024-12-10 05:04:23.869960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.768 [2024-12-10 05:04:23.869973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.768 [2024-12-10 05:04:23.869979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.768 [2024-12-10 05:04:23.869985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.768 [2024-12-10 05:04:23.870000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.768 qpair failed and we were unable to recover it. 
00:27:32.768 [2024-12-10 05:04:23.879911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.768 [2024-12-10 05:04:23.879961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.768 [2024-12-10 05:04:23.879974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.768 [2024-12-10 05:04:23.879980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.768 [2024-12-10 05:04:23.879986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.768 [2024-12-10 05:04:23.880001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.768 qpair failed and we were unable to recover it. 
00:27:32.768 [2024-12-10 05:04:23.890050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.768 [2024-12-10 05:04:23.890099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.768 [2024-12-10 05:04:23.890112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.768 [2024-12-10 05:04:23.890118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.768 [2024-12-10 05:04:23.890124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:32.768 [2024-12-10 05:04:23.890143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.768 qpair failed and we were unable to recover it. 
00:27:33.028 [2024-12-10 05:04:23.900015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.028 [2024-12-10 05:04:23.900078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.028 [2024-12-10 05:04:23.900092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.028 [2024-12-10 05:04:23.900099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.028 [2024-12-10 05:04:23.900105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.028 [2024-12-10 05:04:23.900121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.028 qpair failed and we were unable to recover it. 
00:27:33.028 [2024-12-10 05:04:23.910056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.028 [2024-12-10 05:04:23.910112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.029 [2024-12-10 05:04:23.910125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.029 [2024-12-10 05:04:23.910132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.029 [2024-12-10 05:04:23.910139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.029 [2024-12-10 05:04:23.910154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.029 qpair failed and we were unable to recover it. 
00:27:33.029 [2024-12-10 05:04:23.920092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.029 [2024-12-10 05:04:23.920144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.029 [2024-12-10 05:04:23.920157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.029 [2024-12-10 05:04:23.920164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.029 [2024-12-10 05:04:23.920176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.029 [2024-12-10 05:04:23.920191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.029 qpair failed and we were unable to recover it. 
00:27:33.029 [2024-12-10 05:04:23.930119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.029 [2024-12-10 05:04:23.930181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.029 [2024-12-10 05:04:23.930194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.029 [2024-12-10 05:04:23.930200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.029 [2024-12-10 05:04:23.930207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.029 [2024-12-10 05:04:23.930222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.029 qpair failed and we were unable to recover it. 
00:27:33.029 [2024-12-10 05:04:23.940110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.029 [2024-12-10 05:04:23.940164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.029 [2024-12-10 05:04:23.940184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.029 [2024-12-10 05:04:23.940191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.029 [2024-12-10 05:04:23.940198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.029 [2024-12-10 05:04:23.940214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.029 qpair failed and we were unable to recover it. 
00:27:33.029 [2024-12-10 05:04:23.950118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.029 [2024-12-10 05:04:23.950195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.029 [2024-12-10 05:04:23.950210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.029 [2024-12-10 05:04:23.950217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.029 [2024-12-10 05:04:23.950223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.029 [2024-12-10 05:04:23.950240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.029 qpair failed and we were unable to recover it. 
00:27:33.029 [2024-12-10 05:04:23.960213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.029 [2024-12-10 05:04:23.960266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.029 [2024-12-10 05:04:23.960279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.029 [2024-12-10 05:04:23.960285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.029 [2024-12-10 05:04:23.960291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.029 [2024-12-10 05:04:23.960305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.029 qpair failed and we were unable to recover it. 
00:27:33.029 [2024-12-10 05:04:23.970187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.029 [2024-12-10 05:04:23.970243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.029 [2024-12-10 05:04:23.970258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.029 [2024-12-10 05:04:23.970264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.029 [2024-12-10 05:04:23.970271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.029 [2024-12-10 05:04:23.970287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.029 qpair failed and we were unable to recover it. 
00:27:33.029 [2024-12-10 05:04:23.980333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.029 [2024-12-10 05:04:23.980392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.029 [2024-12-10 05:04:23.980407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.029 [2024-12-10 05:04:23.980418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.029 [2024-12-10 05:04:23.980426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.029 [2024-12-10 05:04:23.980441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.029 qpair failed and we were unable to recover it. 
00:27:33.029 [2024-12-10 05:04:23.990326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.029 [2024-12-10 05:04:23.990384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.029 [2024-12-10 05:04:23.990397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.029 [2024-12-10 05:04:23.990404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.029 [2024-12-10 05:04:23.990410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.029 [2024-12-10 05:04:23.990425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.029 qpair failed and we were unable to recover it. 
00:27:33.029 [2024-12-10 05:04:24.000329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.029 [2024-12-10 05:04:24.000388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.029 [2024-12-10 05:04:24.000403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.029 [2024-12-10 05:04:24.000410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.029 [2024-12-10 05:04:24.000417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.029 [2024-12-10 05:04:24.000434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.029 qpair failed and we were unable to recover it. 
00:27:33.029 [2024-12-10 05:04:24.010325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.029 [2024-12-10 05:04:24.010381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.029 [2024-12-10 05:04:24.010394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.029 [2024-12-10 05:04:24.010401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.029 [2024-12-10 05:04:24.010407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.029 [2024-12-10 05:04:24.010422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.029 qpair failed and we were unable to recover it. 
00:27:33.029 [2024-12-10 05:04:24.020369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.029 [2024-12-10 05:04:24.020426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.029 [2024-12-10 05:04:24.020440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.029 [2024-12-10 05:04:24.020447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.029 [2024-12-10 05:04:24.020453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.029 [2024-12-10 05:04:24.020472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.029 qpair failed and we were unable to recover it. 
00:27:33.029 [2024-12-10 05:04:24.030429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.029 [2024-12-10 05:04:24.030485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.029 [2024-12-10 05:04:24.030499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.029 [2024-12-10 05:04:24.030506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.029 [2024-12-10 05:04:24.030512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.029 [2024-12-10 05:04:24.030528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.029 qpair failed and we were unable to recover it. 
00:27:33.029 [2024-12-10 05:04:24.040494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.029 [2024-12-10 05:04:24.040562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.030 [2024-12-10 05:04:24.040576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.030 [2024-12-10 05:04:24.040583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.030 [2024-12-10 05:04:24.040589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.030 [2024-12-10 05:04:24.040604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.030 qpair failed and we were unable to recover it. 
00:27:33.030 [2024-12-10 05:04:24.050404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.030 [2024-12-10 05:04:24.050462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.030 [2024-12-10 05:04:24.050475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.030 [2024-12-10 05:04:24.050483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.030 [2024-12-10 05:04:24.050489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.030 [2024-12-10 05:04:24.050504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.030 qpair failed and we were unable to recover it. 
00:27:33.030 [2024-12-10 05:04:24.060442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.030 [2024-12-10 05:04:24.060539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.030 [2024-12-10 05:04:24.060554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.030 [2024-12-10 05:04:24.060562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.030 [2024-12-10 05:04:24.060569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.030 [2024-12-10 05:04:24.060585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.030 qpair failed and we were unable to recover it. 
00:27:33.030 [2024-12-10 05:04:24.070471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.030 [2024-12-10 05:04:24.070530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.030 [2024-12-10 05:04:24.070543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.030 [2024-12-10 05:04:24.070550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.030 [2024-12-10 05:04:24.070557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.030 [2024-12-10 05:04:24.070573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.030 qpair failed and we were unable to recover it. 
00:27:33.030 [2024-12-10 05:04:24.080576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.030 [2024-12-10 05:04:24.080632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.030 [2024-12-10 05:04:24.080645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.030 [2024-12-10 05:04:24.080652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.030 [2024-12-10 05:04:24.080659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.030 [2024-12-10 05:04:24.080674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.030 qpair failed and we were unable to recover it. 
00:27:33.030 [2024-12-10 05:04:24.090675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.030 [2024-12-10 05:04:24.090724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.030 [2024-12-10 05:04:24.090737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.030 [2024-12-10 05:04:24.090743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.030 [2024-12-10 05:04:24.090749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.030 [2024-12-10 05:04:24.090765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.030 qpair failed and we were unable to recover it. 
00:27:33.030 [2024-12-10 05:04:24.100609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.030 [2024-12-10 05:04:24.100661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.030 [2024-12-10 05:04:24.100675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.030 [2024-12-10 05:04:24.100682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.030 [2024-12-10 05:04:24.100688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.030 [2024-12-10 05:04:24.100703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.030 qpair failed and we were unable to recover it. 
00:27:33.030 [2024-12-10 05:04:24.110688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.030 [2024-12-10 05:04:24.110744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.030 [2024-12-10 05:04:24.110760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.030 [2024-12-10 05:04:24.110767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.030 [2024-12-10 05:04:24.110773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.030 [2024-12-10 05:04:24.110787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.030 qpair failed and we were unable to recover it. 
00:27:33.030 [2024-12-10 05:04:24.120660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.030 [2024-12-10 05:04:24.120724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.030 [2024-12-10 05:04:24.120737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.030 [2024-12-10 05:04:24.120744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.030 [2024-12-10 05:04:24.120750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.030 [2024-12-10 05:04:24.120765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.030 qpair failed and we were unable to recover it. 
00:27:33.030 [2024-12-10 05:04:24.130642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.030 [2024-12-10 05:04:24.130697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.030 [2024-12-10 05:04:24.130710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.030 [2024-12-10 05:04:24.130716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.030 [2024-12-10 05:04:24.130722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.030 [2024-12-10 05:04:24.130737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.030 qpair failed and we were unable to recover it. 
00:27:33.030 [2024-12-10 05:04:24.140742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.030 [2024-12-10 05:04:24.140796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.030 [2024-12-10 05:04:24.140808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.030 [2024-12-10 05:04:24.140815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.030 [2024-12-10 05:04:24.140821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.030 [2024-12-10 05:04:24.140835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.030 qpair failed and we were unable to recover it. 
00:27:33.030 [2024-12-10 05:04:24.150749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.030 [2024-12-10 05:04:24.150814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.030 [2024-12-10 05:04:24.150827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.030 [2024-12-10 05:04:24.150834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.030 [2024-12-10 05:04:24.150843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.030 [2024-12-10 05:04:24.150858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.030 qpair failed and we were unable to recover it. 
00:27:33.291 [2024-12-10 05:04:24.160753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.291 [2024-12-10 05:04:24.160848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.291 [2024-12-10 05:04:24.160861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.291 [2024-12-10 05:04:24.160868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.291 [2024-12-10 05:04:24.160874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.291 [2024-12-10 05:04:24.160889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.291 qpair failed and we were unable to recover it. 
00:27:33.291 [2024-12-10 05:04:24.170824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.291 [2024-12-10 05:04:24.170878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.291 [2024-12-10 05:04:24.170891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.291 [2024-12-10 05:04:24.170898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.291 [2024-12-10 05:04:24.170904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.291 [2024-12-10 05:04:24.170919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.291 qpair failed and we were unable to recover it. 
00:27:33.291 [2024-12-10 05:04:24.180848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.291 [2024-12-10 05:04:24.180902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.291 [2024-12-10 05:04:24.180915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.291 [2024-12-10 05:04:24.180922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.291 [2024-12-10 05:04:24.180928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.291 [2024-12-10 05:04:24.180943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.291 qpair failed and we were unable to recover it. 
00:27:33.291 [2024-12-10 05:04:24.190890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.291 [2024-12-10 05:04:24.190942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.291 [2024-12-10 05:04:24.190954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.291 [2024-12-10 05:04:24.190960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.291 [2024-12-10 05:04:24.190966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.291 [2024-12-10 05:04:24.190980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.291 qpair failed and we were unable to recover it. 
00:27:33.291 - 00:27:33.554 [the same six-record CONNECT failure sequence repeated 34 more times at roughly 10 ms intervals, device timestamps [2024-12-10 05:04:24.200973] through [2024-12-10 05:04:24.531948]; every attempt ended with "qpair failed and we were unable to recover it."]
00:27:33.554 [2024-12-10 05:04:24.541882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.555 [2024-12-10 05:04:24.541933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.555 [2024-12-10 05:04:24.541947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.555 [2024-12-10 05:04:24.541957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.555 [2024-12-10 05:04:24.541964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.555 [2024-12-10 05:04:24.541980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.555 qpair failed and we were unable to recover it. 
00:27:33.555 [2024-12-10 05:04:24.551878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.555 [2024-12-10 05:04:24.551936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.555 [2024-12-10 05:04:24.551949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.555 [2024-12-10 05:04:24.551956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.555 [2024-12-10 05:04:24.551963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.555 [2024-12-10 05:04:24.551978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.555 qpair failed and we were unable to recover it. 
00:27:33.555 [2024-12-10 05:04:24.561968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.555 [2024-12-10 05:04:24.562044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.555 [2024-12-10 05:04:24.562057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.555 [2024-12-10 05:04:24.562064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.555 [2024-12-10 05:04:24.562070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.555 [2024-12-10 05:04:24.562085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.555 qpair failed and we were unable to recover it. 
00:27:33.555 [2024-12-10 05:04:24.571975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.555 [2024-12-10 05:04:24.572025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.555 [2024-12-10 05:04:24.572038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.555 [2024-12-10 05:04:24.572045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.555 [2024-12-10 05:04:24.572051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.555 [2024-12-10 05:04:24.572067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.555 qpair failed and we were unable to recover it. 
00:27:33.555 [2024-12-10 05:04:24.581993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.555 [2024-12-10 05:04:24.582047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.555 [2024-12-10 05:04:24.582061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.555 [2024-12-10 05:04:24.582068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.555 [2024-12-10 05:04:24.582074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.555 [2024-12-10 05:04:24.582093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.555 qpair failed and we were unable to recover it. 
00:27:33.555 [2024-12-10 05:04:24.592046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.555 [2024-12-10 05:04:24.592105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.555 [2024-12-10 05:04:24.592118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.555 [2024-12-10 05:04:24.592125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.555 [2024-12-10 05:04:24.592131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.555 [2024-12-10 05:04:24.592146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.555 qpair failed and we were unable to recover it. 
00:27:33.555 [2024-12-10 05:04:24.602071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.555 [2024-12-10 05:04:24.602130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.555 [2024-12-10 05:04:24.602143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.555 [2024-12-10 05:04:24.602150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.555 [2024-12-10 05:04:24.602156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.555 [2024-12-10 05:04:24.602176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.555 qpair failed and we were unable to recover it. 
00:27:33.555 [2024-12-10 05:04:24.612088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.555 [2024-12-10 05:04:24.612188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.555 [2024-12-10 05:04:24.612202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.555 [2024-12-10 05:04:24.612210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.555 [2024-12-10 05:04:24.612216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.555 [2024-12-10 05:04:24.612231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.555 qpair failed and we were unable to recover it. 
00:27:33.555 [2024-12-10 05:04:24.622119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.555 [2024-12-10 05:04:24.622178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.555 [2024-12-10 05:04:24.622192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.555 [2024-12-10 05:04:24.622200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.555 [2024-12-10 05:04:24.622206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.555 [2024-12-10 05:04:24.622221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.555 qpair failed and we were unable to recover it. 
00:27:33.555 [2024-12-10 05:04:24.632219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.555 [2024-12-10 05:04:24.632282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.555 [2024-12-10 05:04:24.632296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.555 [2024-12-10 05:04:24.632303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.555 [2024-12-10 05:04:24.632309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.555 [2024-12-10 05:04:24.632325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.555 qpair failed and we were unable to recover it. 
00:27:33.555 [2024-12-10 05:04:24.642230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.555 [2024-12-10 05:04:24.642303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.555 [2024-12-10 05:04:24.642316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.555 [2024-12-10 05:04:24.642323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.555 [2024-12-10 05:04:24.642329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.555 [2024-12-10 05:04:24.642344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.555 qpair failed and we were unable to recover it. 
00:27:33.555 [2024-12-10 05:04:24.652260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.555 [2024-12-10 05:04:24.652322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.555 [2024-12-10 05:04:24.652335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.555 [2024-12-10 05:04:24.652342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.555 [2024-12-10 05:04:24.652349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.555 [2024-12-10 05:04:24.652364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.555 qpair failed and we were unable to recover it. 
00:27:33.555 [2024-12-10 05:04:24.662254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.555 [2024-12-10 05:04:24.662320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.555 [2024-12-10 05:04:24.662333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.555 [2024-12-10 05:04:24.662340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.555 [2024-12-10 05:04:24.662346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.555 [2024-12-10 05:04:24.662361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.555 qpair failed and we were unable to recover it. 
00:27:33.555 [2024-12-10 05:04:24.672287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.555 [2024-12-10 05:04:24.672342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.555 [2024-12-10 05:04:24.672359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.556 [2024-12-10 05:04:24.672366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.556 [2024-12-10 05:04:24.672372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.556 [2024-12-10 05:04:24.672387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.556 qpair failed and we were unable to recover it. 
00:27:33.556 [2024-12-10 05:04:24.682252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.556 [2024-12-10 05:04:24.682309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.556 [2024-12-10 05:04:24.682323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.556 [2024-12-10 05:04:24.682330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.556 [2024-12-10 05:04:24.682336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.556 [2024-12-10 05:04:24.682351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.556 qpair failed and we were unable to recover it. 
00:27:33.816 [2024-12-10 05:04:24.692336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.816 [2024-12-10 05:04:24.692405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.816 [2024-12-10 05:04:24.692418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.816 [2024-12-10 05:04:24.692426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.816 [2024-12-10 05:04:24.692432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.816 [2024-12-10 05:04:24.692448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.816 qpair failed and we were unable to recover it. 
00:27:33.816 [2024-12-10 05:04:24.702353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.816 [2024-12-10 05:04:24.702423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.816 [2024-12-10 05:04:24.702436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.816 [2024-12-10 05:04:24.702444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.816 [2024-12-10 05:04:24.702450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.816 [2024-12-10 05:04:24.702465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.816 qpair failed and we were unable to recover it. 
00:27:33.817 [2024-12-10 05:04:24.712424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.817 [2024-12-10 05:04:24.712485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.817 [2024-12-10 05:04:24.712498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.817 [2024-12-10 05:04:24.712506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.817 [2024-12-10 05:04:24.712515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.817 [2024-12-10 05:04:24.712531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.817 qpair failed and we were unable to recover it. 
00:27:33.817 [2024-12-10 05:04:24.722419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.817 [2024-12-10 05:04:24.722473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.817 [2024-12-10 05:04:24.722487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.817 [2024-12-10 05:04:24.722493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.817 [2024-12-10 05:04:24.722500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.817 [2024-12-10 05:04:24.722515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.817 qpair failed and we were unable to recover it. 
00:27:33.817 [2024-12-10 05:04:24.732522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.817 [2024-12-10 05:04:24.732609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.817 [2024-12-10 05:04:24.732623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.817 [2024-12-10 05:04:24.732629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.817 [2024-12-10 05:04:24.732635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.817 [2024-12-10 05:04:24.732650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.817 qpair failed and we were unable to recover it. 
00:27:33.817 [2024-12-10 05:04:24.742507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.817 [2024-12-10 05:04:24.742579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.817 [2024-12-10 05:04:24.742593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.817 [2024-12-10 05:04:24.742599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.817 [2024-12-10 05:04:24.742605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.817 [2024-12-10 05:04:24.742620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.817 qpair failed and we were unable to recover it. 
00:27:33.817 [2024-12-10 05:04:24.752433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.817 [2024-12-10 05:04:24.752539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.817 [2024-12-10 05:04:24.752552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.817 [2024-12-10 05:04:24.752559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.817 [2024-12-10 05:04:24.752565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.817 [2024-12-10 05:04:24.752581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.817 qpair failed and we were unable to recover it. 
00:27:33.817 [2024-12-10 05:04:24.762565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.817 [2024-12-10 05:04:24.762627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.817 [2024-12-10 05:04:24.762640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.817 [2024-12-10 05:04:24.762646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.817 [2024-12-10 05:04:24.762653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.817 [2024-12-10 05:04:24.762668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.817 qpair failed and we were unable to recover it. 
00:27:33.817 [2024-12-10 05:04:24.772550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.817 [2024-12-10 05:04:24.772600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.817 [2024-12-10 05:04:24.772613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.817 [2024-12-10 05:04:24.772619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.817 [2024-12-10 05:04:24.772625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.817 [2024-12-10 05:04:24.772641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.817 qpair failed and we were unable to recover it. 
00:27:33.817 [2024-12-10 05:04:24.782629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.817 [2024-12-10 05:04:24.782680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.817 [2024-12-10 05:04:24.782693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.817 [2024-12-10 05:04:24.782699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.817 [2024-12-10 05:04:24.782705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.817 [2024-12-10 05:04:24.782720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.817 qpair failed and we were unable to recover it. 
00:27:33.817 [2024-12-10 05:04:24.792626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.817 [2024-12-10 05:04:24.792685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.817 [2024-12-10 05:04:24.792698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.817 [2024-12-10 05:04:24.792705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.817 [2024-12-10 05:04:24.792711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.817 [2024-12-10 05:04:24.792726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.817 qpair failed and we were unable to recover it. 
00:27:33.817 [2024-12-10 05:04:24.802580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.817 [2024-12-10 05:04:24.802639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.817 [2024-12-10 05:04:24.802655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.817 [2024-12-10 05:04:24.802662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.817 [2024-12-10 05:04:24.802668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.817 [2024-12-10 05:04:24.802683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.817 qpair failed and we were unable to recover it. 
00:27:33.817 [2024-12-10 05:04:24.812713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.817 [2024-12-10 05:04:24.812804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.817 [2024-12-10 05:04:24.812817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.817 [2024-12-10 05:04:24.812824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.817 [2024-12-10 05:04:24.812831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.817 [2024-12-10 05:04:24.812846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.817 qpair failed and we were unable to recover it. 
00:27:33.817 [2024-12-10 05:04:24.822697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.817 [2024-12-10 05:04:24.822750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.817 [2024-12-10 05:04:24.822764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.817 [2024-12-10 05:04:24.822770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.817 [2024-12-10 05:04:24.822777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.817 [2024-12-10 05:04:24.822793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.817 qpair failed and we were unable to recover it. 
00:27:33.817 [2024-12-10 05:04:24.832761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.817 [2024-12-10 05:04:24.832817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.817 [2024-12-10 05:04:24.832830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.817 [2024-12-10 05:04:24.832837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.817 [2024-12-10 05:04:24.832844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.817 [2024-12-10 05:04:24.832860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.817 qpair failed and we were unable to recover it. 
00:27:33.817 [2024-12-10 05:04:24.842753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.817 [2024-12-10 05:04:24.842807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.818 [2024-12-10 05:04:24.842820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.818 [2024-12-10 05:04:24.842827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.818 [2024-12-10 05:04:24.842836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.818 [2024-12-10 05:04:24.842851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.818 qpair failed and we were unable to recover it. 
00:27:33.818 [2024-12-10 05:04:24.852821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.818 [2024-12-10 05:04:24.852875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.818 [2024-12-10 05:04:24.852888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.818 [2024-12-10 05:04:24.852895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.818 [2024-12-10 05:04:24.852901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.818 [2024-12-10 05:04:24.852916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.818 qpair failed and we were unable to recover it. 
00:27:33.818 [2024-12-10 05:04:24.862799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.818 [2024-12-10 05:04:24.862851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.818 [2024-12-10 05:04:24.862864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.818 [2024-12-10 05:04:24.862871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.818 [2024-12-10 05:04:24.862878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.818 [2024-12-10 05:04:24.862894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.818 qpair failed and we were unable to recover it. 
00:27:33.818 [2024-12-10 05:04:24.872834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.818 [2024-12-10 05:04:24.872924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.818 [2024-12-10 05:04:24.872937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.818 [2024-12-10 05:04:24.872945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.818 [2024-12-10 05:04:24.872951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.818 [2024-12-10 05:04:24.872965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.818 qpair failed and we were unable to recover it. 
00:27:33.818 [2024-12-10 05:04:24.882868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.818 [2024-12-10 05:04:24.882924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.818 [2024-12-10 05:04:24.882936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.818 [2024-12-10 05:04:24.882943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.818 [2024-12-10 05:04:24.882949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.818 [2024-12-10 05:04:24.882965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.818 qpair failed and we were unable to recover it. 
00:27:33.818 [2024-12-10 05:04:24.892933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.818 [2024-12-10 05:04:24.893001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.818 [2024-12-10 05:04:24.893015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.818 [2024-12-10 05:04:24.893023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.818 [2024-12-10 05:04:24.893029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.818 [2024-12-10 05:04:24.893044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.818 qpair failed and we were unable to recover it. 
00:27:33.818 [2024-12-10 05:04:24.902929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.818 [2024-12-10 05:04:24.902979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.818 [2024-12-10 05:04:24.902993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.818 [2024-12-10 05:04:24.903000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.818 [2024-12-10 05:04:24.903006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.818 [2024-12-10 05:04:24.903021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.818 qpair failed and we were unable to recover it. 
00:27:33.818 [2024-12-10 05:04:24.912962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.818 [2024-12-10 05:04:24.913035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.818 [2024-12-10 05:04:24.913048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.818 [2024-12-10 05:04:24.913055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.818 [2024-12-10 05:04:24.913061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.818 [2024-12-10 05:04:24.913077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.818 qpair failed and we were unable to recover it. 
00:27:33.818 [2024-12-10 05:04:24.922977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.818 [2024-12-10 05:04:24.923034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.818 [2024-12-10 05:04:24.923048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.818 [2024-12-10 05:04:24.923055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.818 [2024-12-10 05:04:24.923062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.818 [2024-12-10 05:04:24.923077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.818 qpair failed and we were unable to recover it. 
00:27:33.818 [2024-12-10 05:04:24.932970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.818 [2024-12-10 05:04:24.933033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.818 [2024-12-10 05:04:24.933047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.818 [2024-12-10 05:04:24.933054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.818 [2024-12-10 05:04:24.933060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.818 [2024-12-10 05:04:24.933075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.818 qpair failed and we were unable to recover it. 
00:27:33.818 [2024-12-10 05:04:24.943046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.818 [2024-12-10 05:04:24.943097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.818 [2024-12-10 05:04:24.943110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.818 [2024-12-10 05:04:24.943117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.818 [2024-12-10 05:04:24.943124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:33.818 [2024-12-10 05:04:24.943139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.818 qpair failed and we were unable to recover it. 
00:27:34.082 [2024-12-10 05:04:24.953002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.082 [2024-12-10 05:04:24.953058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.082 [2024-12-10 05:04:24.953072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.082 [2024-12-10 05:04:24.953079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.082 [2024-12-10 05:04:24.953085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.082 [2024-12-10 05:04:24.953101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.082 qpair failed and we were unable to recover it. 
00:27:34.082 [2024-12-10 05:04:24.963085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.082 [2024-12-10 05:04:24.963142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.082 [2024-12-10 05:04:24.963155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.082 [2024-12-10 05:04:24.963161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.082 [2024-12-10 05:04:24.963171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.082 [2024-12-10 05:04:24.963186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.082 qpair failed and we were unable to recover it. 
00:27:34.082 [2024-12-10 05:04:24.973103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.082 [2024-12-10 05:04:24.973155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.082 [2024-12-10 05:04:24.973172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.082 [2024-12-10 05:04:24.973183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.082 [2024-12-10 05:04:24.973190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.082 [2024-12-10 05:04:24.973206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.082 qpair failed and we were unable to recover it. 
00:27:34.082 [2024-12-10 05:04:24.983151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.082 [2024-12-10 05:04:24.983206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.082 [2024-12-10 05:04:24.983219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.082 [2024-12-10 05:04:24.983226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.082 [2024-12-10 05:04:24.983232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.082 [2024-12-10 05:04:24.983248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.082 qpair failed and we were unable to recover it. 
00:27:34.082 [2024-12-10 05:04:24.993173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.082 [2024-12-10 05:04:24.993229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.082 [2024-12-10 05:04:24.993243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.082 [2024-12-10 05:04:24.993250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.082 [2024-12-10 05:04:24.993256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.082 [2024-12-10 05:04:24.993271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.082 qpair failed and we were unable to recover it. 
00:27:34.082 [2024-12-10 05:04:25.003200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.082 [2024-12-10 05:04:25.003253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.082 [2024-12-10 05:04:25.003266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.082 [2024-12-10 05:04:25.003273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.082 [2024-12-10 05:04:25.003279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.082 [2024-12-10 05:04:25.003295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.082 qpair failed and we were unable to recover it. 
00:27:34.082 [2024-12-10 05:04:25.013209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.082 [2024-12-10 05:04:25.013262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.082 [2024-12-10 05:04:25.013275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.082 [2024-12-10 05:04:25.013282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.082 [2024-12-10 05:04:25.013288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.082 [2024-12-10 05:04:25.013307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.082 qpair failed and we were unable to recover it. 
00:27:34.082 [2024-12-10 05:04:25.023274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.082 [2024-12-10 05:04:25.023336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.082 [2024-12-10 05:04:25.023349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.082 [2024-12-10 05:04:25.023357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.082 [2024-12-10 05:04:25.023363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.082 [2024-12-10 05:04:25.023378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.082 qpair failed and we were unable to recover it. 
00:27:34.082 [2024-12-10 05:04:25.033277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.082 [2024-12-10 05:04:25.033337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.082 [2024-12-10 05:04:25.033350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.082 [2024-12-10 05:04:25.033357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.082 [2024-12-10 05:04:25.033365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.082 [2024-12-10 05:04:25.033381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.082 qpair failed and we were unable to recover it. 
00:27:34.082 [2024-12-10 05:04:25.043302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.082 [2024-12-10 05:04:25.043359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.082 [2024-12-10 05:04:25.043372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.082 [2024-12-10 05:04:25.043378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.082 [2024-12-10 05:04:25.043385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.082 [2024-12-10 05:04:25.043400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.082 qpair failed and we were unable to recover it. 
00:27:34.082 [2024-12-10 05:04:25.053351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.082 [2024-12-10 05:04:25.053440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.082 [2024-12-10 05:04:25.053455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.082 [2024-12-10 05:04:25.053462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.082 [2024-12-10 05:04:25.053468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.082 [2024-12-10 05:04:25.053484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.082 qpair failed and we were unable to recover it. 
00:27:34.082 [2024-12-10 05:04:25.063351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.082 [2024-12-10 05:04:25.063411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.082 [2024-12-10 05:04:25.063424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.082 [2024-12-10 05:04:25.063432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.083 [2024-12-10 05:04:25.063438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.083 [2024-12-10 05:04:25.063454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-12-10 05:04:25.073367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.083 [2024-12-10 05:04:25.073423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.083 [2024-12-10 05:04:25.073436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.083 [2024-12-10 05:04:25.073443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.083 [2024-12-10 05:04:25.073449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.083 [2024-12-10 05:04:25.073464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-12-10 05:04:25.083432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.083 [2024-12-10 05:04:25.083496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.083 [2024-12-10 05:04:25.083509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.083 [2024-12-10 05:04:25.083516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.083 [2024-12-10 05:04:25.083523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.083 [2024-12-10 05:04:25.083538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-12-10 05:04:25.093436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.083 [2024-12-10 05:04:25.093492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.083 [2024-12-10 05:04:25.093504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.083 [2024-12-10 05:04:25.093511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.083 [2024-12-10 05:04:25.093518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.083 [2024-12-10 05:04:25.093533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-12-10 05:04:25.103464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.083 [2024-12-10 05:04:25.103516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.083 [2024-12-10 05:04:25.103529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.083 [2024-12-10 05:04:25.103539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.083 [2024-12-10 05:04:25.103545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.083 [2024-12-10 05:04:25.103560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-12-10 05:04:25.113519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.083 [2024-12-10 05:04:25.113584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.083 [2024-12-10 05:04:25.113596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.083 [2024-12-10 05:04:25.113604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.083 [2024-12-10 05:04:25.113610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.083 [2024-12-10 05:04:25.113626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-12-10 05:04:25.123566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.083 [2024-12-10 05:04:25.123623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.083 [2024-12-10 05:04:25.123636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.083 [2024-12-10 05:04:25.123643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.083 [2024-12-10 05:04:25.123649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.083 [2024-12-10 05:04:25.123664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-12-10 05:04:25.133556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.083 [2024-12-10 05:04:25.133612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.083 [2024-12-10 05:04:25.133628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.083 [2024-12-10 05:04:25.133636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.083 [2024-12-10 05:04:25.133643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.083 [2024-12-10 05:04:25.133659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-12-10 05:04:25.143577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.083 [2024-12-10 05:04:25.143629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.083 [2024-12-10 05:04:25.143643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.083 [2024-12-10 05:04:25.143650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.083 [2024-12-10 05:04:25.143657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.083 [2024-12-10 05:04:25.143676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-12-10 05:04:25.153646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.083 [2024-12-10 05:04:25.153704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.083 [2024-12-10 05:04:25.153716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.083 [2024-12-10 05:04:25.153723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.083 [2024-12-10 05:04:25.153730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.083 [2024-12-10 05:04:25.153745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-12-10 05:04:25.163670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.083 [2024-12-10 05:04:25.163739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.083 [2024-12-10 05:04:25.163754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.083 [2024-12-10 05:04:25.163761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.083 [2024-12-10 05:04:25.163767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.083 [2024-12-10 05:04:25.163783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-12-10 05:04:25.173588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.083 [2024-12-10 05:04:25.173645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.083 [2024-12-10 05:04:25.173660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.083 [2024-12-10 05:04:25.173667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.083 [2024-12-10 05:04:25.173674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.083 [2024-12-10 05:04:25.173689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-12-10 05:04:25.183685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.083 [2024-12-10 05:04:25.183742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.083 [2024-12-10 05:04:25.183755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.083 [2024-12-10 05:04:25.183762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.083 [2024-12-10 05:04:25.183768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.083 [2024-12-10 05:04:25.183783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-12-10 05:04:25.193758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.083 [2024-12-10 05:04:25.193815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.083 [2024-12-10 05:04:25.193828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.083 [2024-12-10 05:04:25.193835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.083 [2024-12-10 05:04:25.193841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.083 [2024-12-10 05:04:25.193856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.083 [2024-12-10 05:04:25.203748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.083 [2024-12-10 05:04:25.203805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.083 [2024-12-10 05:04:25.203819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.083 [2024-12-10 05:04:25.203825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.083 [2024-12-10 05:04:25.203831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.083 [2024-12-10 05:04:25.203847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.083 qpair failed and we were unable to recover it. 
00:27:34.345 [2024-12-10 05:04:25.213798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.345 [2024-12-10 05:04:25.213853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.345 [2024-12-10 05:04:25.213866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.345 [2024-12-10 05:04:25.213873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.345 [2024-12-10 05:04:25.213879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.345 [2024-12-10 05:04:25.213895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.345 qpair failed and we were unable to recover it. 
00:27:34.345 [2024-12-10 05:04:25.223849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.345 [2024-12-10 05:04:25.223902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.345 [2024-12-10 05:04:25.223916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.345 [2024-12-10 05:04:25.223923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.345 [2024-12-10 05:04:25.223929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.345 [2024-12-10 05:04:25.223945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.345 qpair failed and we were unable to recover it. 
00:27:34.345 [2024-12-10 05:04:25.233769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.345 [2024-12-10 05:04:25.233853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.345 [2024-12-10 05:04:25.233870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.345 [2024-12-10 05:04:25.233877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.345 [2024-12-10 05:04:25.233883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.345 [2024-12-10 05:04:25.233898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.345 qpair failed and we were unable to recover it. 
00:27:34.345 [2024-12-10 05:04:25.243930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.345 [2024-12-10 05:04:25.244020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.345 [2024-12-10 05:04:25.244033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.345 [2024-12-10 05:04:25.244041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.345 [2024-12-10 05:04:25.244046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.345 [2024-12-10 05:04:25.244062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.345 qpair failed and we were unable to recover it. 
00:27:34.345 [2024-12-10 05:04:25.253911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.345 [2024-12-10 05:04:25.253985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.345 [2024-12-10 05:04:25.253999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.345 [2024-12-10 05:04:25.254006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.345 [2024-12-10 05:04:25.254012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.345 [2024-12-10 05:04:25.254027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.345 qpair failed and we were unable to recover it. 
00:27:34.345 [2024-12-10 05:04:25.263912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.346 [2024-12-10 05:04:25.263971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.346 [2024-12-10 05:04:25.263984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.346 [2024-12-10 05:04:25.263990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.346 [2024-12-10 05:04:25.263997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.346 [2024-12-10 05:04:25.264012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.346 qpair failed and we were unable to recover it. 
00:27:34.346 [2024-12-10 05:04:25.273955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.346 [2024-12-10 05:04:25.274023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.346 [2024-12-10 05:04:25.274039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.346 [2024-12-10 05:04:25.274046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.346 [2024-12-10 05:04:25.274056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.346 [2024-12-10 05:04:25.274071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.346 qpair failed and we were unable to recover it. 
00:27:34.346 [2024-12-10 05:04:25.283972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.346 [2024-12-10 05:04:25.284029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.346 [2024-12-10 05:04:25.284042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.346 [2024-12-10 05:04:25.284048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.346 [2024-12-10 05:04:25.284055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.346 [2024-12-10 05:04:25.284070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.346 qpair failed and we were unable to recover it. 
00:27:34.346 [2024-12-10 05:04:25.293999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.346 [2024-12-10 05:04:25.294054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.346 [2024-12-10 05:04:25.294067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.346 [2024-12-10 05:04:25.294074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.346 [2024-12-10 05:04:25.294080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.346 [2024-12-10 05:04:25.294095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.346 qpair failed and we were unable to recover it. 
00:27:34.346 [2024-12-10 05:04:25.304055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.346 [2024-12-10 05:04:25.304108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.346 [2024-12-10 05:04:25.304122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.346 [2024-12-10 05:04:25.304129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.346 [2024-12-10 05:04:25.304135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.346 [2024-12-10 05:04:25.304150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.346 qpair failed and we were unable to recover it. 
00:27:34.346 [2024-12-10 05:04:25.314058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.346 [2024-12-10 05:04:25.314112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.346 [2024-12-10 05:04:25.314125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.346 [2024-12-10 05:04:25.314132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.346 [2024-12-10 05:04:25.314138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.346 [2024-12-10 05:04:25.314154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.346 qpair failed and we were unable to recover it. 
00:27:34.346 [2024-12-10 05:04:25.324089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.346 [2024-12-10 05:04:25.324149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.346 [2024-12-10 05:04:25.324161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.346 [2024-12-10 05:04:25.324172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.346 [2024-12-10 05:04:25.324178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.346 [2024-12-10 05:04:25.324195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.346 qpair failed and we were unable to recover it. 
00:27:34.346 [2024-12-10 05:04:25.334106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.346 [2024-12-10 05:04:25.334157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.346 [2024-12-10 05:04:25.334174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.346 [2024-12-10 05:04:25.334181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.346 [2024-12-10 05:04:25.334187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.346 [2024-12-10 05:04:25.334202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.346 qpair failed and we were unable to recover it. 
00:27:34.346 [2024-12-10 05:04:25.344134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.346 [2024-12-10 05:04:25.344188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.346 [2024-12-10 05:04:25.344203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.346 [2024-12-10 05:04:25.344210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.346 [2024-12-10 05:04:25.344217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.346 [2024-12-10 05:04:25.344233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.346 qpair failed and we were unable to recover it. 
00:27:34.346 [2024-12-10 05:04:25.354181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.346 [2024-12-10 05:04:25.354239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.346 [2024-12-10 05:04:25.354251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.346 [2024-12-10 05:04:25.354258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.346 [2024-12-10 05:04:25.354264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.346 [2024-12-10 05:04:25.354279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.346 qpair failed and we were unable to recover it. 
00:27:34.346 [2024-12-10 05:04:25.364196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.346 [2024-12-10 05:04:25.364255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.346 [2024-12-10 05:04:25.364271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.346 [2024-12-10 05:04:25.364278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.346 [2024-12-10 05:04:25.364284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.346 [2024-12-10 05:04:25.364300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.346 qpair failed and we were unable to recover it. 
00:27:34.346 [2024-12-10 05:04:25.374239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.346 [2024-12-10 05:04:25.374299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.346 [2024-12-10 05:04:25.374312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.346 [2024-12-10 05:04:25.374318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.346 [2024-12-10 05:04:25.374325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.346 [2024-12-10 05:04:25.374340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.346 qpair failed and we were unable to recover it. 
00:27:34.346 [2024-12-10 05:04:25.384209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.346 [2024-12-10 05:04:25.384301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.346 [2024-12-10 05:04:25.384315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.346 [2024-12-10 05:04:25.384321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.346 [2024-12-10 05:04:25.384327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.346 [2024-12-10 05:04:25.384342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.346 qpair failed and we were unable to recover it. 
00:27:34.346 [2024-12-10 05:04:25.394251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.346 [2024-12-10 05:04:25.394309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.346 [2024-12-10 05:04:25.394324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.346 [2024-12-10 05:04:25.394331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.347 [2024-12-10 05:04:25.394337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.347 [2024-12-10 05:04:25.394352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.347 qpair failed and we were unable to recover it. 
00:27:34.347 [2024-12-10 05:04:25.404323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.347 [2024-12-10 05:04:25.404386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.347 [2024-12-10 05:04:25.404399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.347 [2024-12-10 05:04:25.404407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.347 [2024-12-10 05:04:25.404417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.347 [2024-12-10 05:04:25.404432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.347 qpair failed and we were unable to recover it. 
00:27:34.347 [2024-12-10 05:04:25.414363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.347 [2024-12-10 05:04:25.414414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.347 [2024-12-10 05:04:25.414428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.347 [2024-12-10 05:04:25.414434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.347 [2024-12-10 05:04:25.414440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.347 [2024-12-10 05:04:25.414457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.347 qpair failed and we were unable to recover it. 
00:27:34.347 [2024-12-10 05:04:25.424330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.347 [2024-12-10 05:04:25.424416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.347 [2024-12-10 05:04:25.424429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.347 [2024-12-10 05:04:25.424436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.347 [2024-12-10 05:04:25.424442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.347 [2024-12-10 05:04:25.424457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.347 qpair failed and we were unable to recover it. 
00:27:34.347 [2024-12-10 05:04:25.434407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.347 [2024-12-10 05:04:25.434462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.347 [2024-12-10 05:04:25.434475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.347 [2024-12-10 05:04:25.434481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.347 [2024-12-10 05:04:25.434488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.347 [2024-12-10 05:04:25.434503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.347 qpair failed and we were unable to recover it.
00:27:34.347 [2024-12-10 05:04:25.444332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.347 [2024-12-10 05:04:25.444392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.347 [2024-12-10 05:04:25.444404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.347 [2024-12-10 05:04:25.444411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.347 [2024-12-10 05:04:25.444417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.347 [2024-12-10 05:04:25.444434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.347 qpair failed and we were unable to recover it.
00:27:34.347 [2024-12-10 05:04:25.454435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.347 [2024-12-10 05:04:25.454526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.347 [2024-12-10 05:04:25.454540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.347 [2024-12-10 05:04:25.454547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.347 [2024-12-10 05:04:25.454553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.347 [2024-12-10 05:04:25.454568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.347 qpair failed and we were unable to recover it.
00:27:34.347 [2024-12-10 05:04:25.464435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.347 [2024-12-10 05:04:25.464518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.347 [2024-12-10 05:04:25.464531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.347 [2024-12-10 05:04:25.464538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.347 [2024-12-10 05:04:25.464545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.347 [2024-12-10 05:04:25.464559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.347 qpair failed and we were unable to recover it.
00:27:34.347 [2024-12-10 05:04:25.474432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.347 [2024-12-10 05:04:25.474495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.347 [2024-12-10 05:04:25.474507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.347 [2024-12-10 05:04:25.474514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.347 [2024-12-10 05:04:25.474520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.347 [2024-12-10 05:04:25.474536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.347 qpair failed and we were unable to recover it.
00:27:34.607 [2024-12-10 05:04:25.484501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.607 [2024-12-10 05:04:25.484556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.607 [2024-12-10 05:04:25.484570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.607 [2024-12-10 05:04:25.484577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.607 [2024-12-10 05:04:25.484583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.607 [2024-12-10 05:04:25.484599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.607 qpair failed and we were unable to recover it.
00:27:34.607 [2024-12-10 05:04:25.494633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.607 [2024-12-10 05:04:25.494720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.607 [2024-12-10 05:04:25.494734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.607 [2024-12-10 05:04:25.494740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.607 [2024-12-10 05:04:25.494746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.607 [2024-12-10 05:04:25.494762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.607 qpair failed and we were unable to recover it.
00:27:34.607 [2024-12-10 05:04:25.504552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.607 [2024-12-10 05:04:25.504642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.607 [2024-12-10 05:04:25.504655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.607 [2024-12-10 05:04:25.504662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.607 [2024-12-10 05:04:25.504668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.607 [2024-12-10 05:04:25.504683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.607 qpair failed and we were unable to recover it.
00:27:34.607 [2024-12-10 05:04:25.514534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.607 [2024-12-10 05:04:25.514590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.607 [2024-12-10 05:04:25.514603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.607 [2024-12-10 05:04:25.514609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.607 [2024-12-10 05:04:25.514615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.607 [2024-12-10 05:04:25.514630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.607 qpair failed and we were unable to recover it.
00:27:34.607 [2024-12-10 05:04:25.524602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.607 [2024-12-10 05:04:25.524702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.607 [2024-12-10 05:04:25.524717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.607 [2024-12-10 05:04:25.524723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.607 [2024-12-10 05:04:25.524730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.607 [2024-12-10 05:04:25.524745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.607 qpair failed and we were unable to recover it.
00:27:34.607 [2024-12-10 05:04:25.534670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.607 [2024-12-10 05:04:25.534723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.607 [2024-12-10 05:04:25.534737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.607 [2024-12-10 05:04:25.534750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.607 [2024-12-10 05:04:25.534757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.607 [2024-12-10 05:04:25.534773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.607 qpair failed and we were unable to recover it.
00:27:34.607 [2024-12-10 05:04:25.544620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.607 [2024-12-10 05:04:25.544675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.607 [2024-12-10 05:04:25.544689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.607 [2024-12-10 05:04:25.544696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.608 [2024-12-10 05:04:25.544703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.608 [2024-12-10 05:04:25.544718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.608 qpair failed and we were unable to recover it.
00:27:34.608 [2024-12-10 05:04:25.554721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.608 [2024-12-10 05:04:25.554822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.608 [2024-12-10 05:04:25.554838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.608 [2024-12-10 05:04:25.554845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.608 [2024-12-10 05:04:25.554851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.608 [2024-12-10 05:04:25.554867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.608 qpair failed and we were unable to recover it.
00:27:34.608 [2024-12-10 05:04:25.564667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.608 [2024-12-10 05:04:25.564733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.608 [2024-12-10 05:04:25.564746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.608 [2024-12-10 05:04:25.564754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.608 [2024-12-10 05:04:25.564760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.608 [2024-12-10 05:04:25.564776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.608 qpair failed and we were unable to recover it.
00:27:34.608 [2024-12-10 05:04:25.574783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.608 [2024-12-10 05:04:25.574838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.608 [2024-12-10 05:04:25.574850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.608 [2024-12-10 05:04:25.574857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.608 [2024-12-10 05:04:25.574863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.608 [2024-12-10 05:04:25.574882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.608 qpair failed and we were unable to recover it.
00:27:34.608 [2024-12-10 05:04:25.584806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.608 [2024-12-10 05:04:25.584858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.608 [2024-12-10 05:04:25.584872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.608 [2024-12-10 05:04:25.584878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.608 [2024-12-10 05:04:25.584885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.608 [2024-12-10 05:04:25.584900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.608 qpair failed and we were unable to recover it.
00:27:34.608 [2024-12-10 05:04:25.594830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.608 [2024-12-10 05:04:25.594887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.608 [2024-12-10 05:04:25.594900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.608 [2024-12-10 05:04:25.594907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.608 [2024-12-10 05:04:25.594914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.608 [2024-12-10 05:04:25.594929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.608 qpair failed and we were unable to recover it.
00:27:34.608 [2024-12-10 05:04:25.604852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.608 [2024-12-10 05:04:25.604912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.608 [2024-12-10 05:04:25.604925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.608 [2024-12-10 05:04:25.604932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.608 [2024-12-10 05:04:25.604938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.608 [2024-12-10 05:04:25.604953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.608 qpair failed and we were unable to recover it.
00:27:34.608 [2024-12-10 05:04:25.614904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.608 [2024-12-10 05:04:25.614961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.608 [2024-12-10 05:04:25.614974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.608 [2024-12-10 05:04:25.614981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.608 [2024-12-10 05:04:25.614987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.608 [2024-12-10 05:04:25.615002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.608 qpair failed and we were unable to recover it.
00:27:34.608 [2024-12-10 05:04:25.624914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.608 [2024-12-10 05:04:25.624972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.608 [2024-12-10 05:04:25.624986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.608 [2024-12-10 05:04:25.624992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.608 [2024-12-10 05:04:25.624999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.608 [2024-12-10 05:04:25.625014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.608 qpair failed and we were unable to recover it.
00:27:34.608 [2024-12-10 05:04:25.634958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.608 [2024-12-10 05:04:25.635014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.608 [2024-12-10 05:04:25.635027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.608 [2024-12-10 05:04:25.635034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.608 [2024-12-10 05:04:25.635040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.608 [2024-12-10 05:04:25.635055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.608 qpair failed and we were unable to recover it.
00:27:34.608 [2024-12-10 05:04:25.644973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.608 [2024-12-10 05:04:25.645026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.608 [2024-12-10 05:04:25.645039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.608 [2024-12-10 05:04:25.645045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.608 [2024-12-10 05:04:25.645051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.608 [2024-12-10 05:04:25.645067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.608 qpair failed and we were unable to recover it.
00:27:34.608 [2024-12-10 05:04:25.654998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.608 [2024-12-10 05:04:25.655055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.608 [2024-12-10 05:04:25.655068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.608 [2024-12-10 05:04:25.655075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.608 [2024-12-10 05:04:25.655082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.608 [2024-12-10 05:04:25.655097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.608 qpair failed and we were unable to recover it.
00:27:34.608 [2024-12-10 05:04:25.665023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.608 [2024-12-10 05:04:25.665074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.608 [2024-12-10 05:04:25.665087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.608 [2024-12-10 05:04:25.665097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.608 [2024-12-10 05:04:25.665103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.608 [2024-12-10 05:04:25.665118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.608 qpair failed and we were unable to recover it.
00:27:34.608 [2024-12-10 05:04:25.675067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.608 [2024-12-10 05:04:25.675123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.608 [2024-12-10 05:04:25.675136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.608 [2024-12-10 05:04:25.675143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.608 [2024-12-10 05:04:25.675149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.608 [2024-12-10 05:04:25.675164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.608 qpair failed and we were unable to recover it.
00:27:34.609 [2024-12-10 05:04:25.685097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.609 [2024-12-10 05:04:25.685152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.609 [2024-12-10 05:04:25.685170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.609 [2024-12-10 05:04:25.685178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.609 [2024-12-10 05:04:25.685185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.609 [2024-12-10 05:04:25.685201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.609 qpair failed and we were unable to recover it.
00:27:34.609 [2024-12-10 05:04:25.695146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.609 [2024-12-10 05:04:25.695215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.609 [2024-12-10 05:04:25.695229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.609 [2024-12-10 05:04:25.695236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.609 [2024-12-10 05:04:25.695242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.609 [2024-12-10 05:04:25.695261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.609 qpair failed and we were unable to recover it.
00:27:34.609 [2024-12-10 05:04:25.705152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.609 [2024-12-10 05:04:25.705240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.609 [2024-12-10 05:04:25.705254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.609 [2024-12-10 05:04:25.705261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.609 [2024-12-10 05:04:25.705266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.609 [2024-12-10 05:04:25.705285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.609 qpair failed and we were unable to recover it.
00:27:34.609 [2024-12-10 05:04:25.715209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.609 [2024-12-10 05:04:25.715273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.609 [2024-12-10 05:04:25.715286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.609 [2024-12-10 05:04:25.715294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.609 [2024-12-10 05:04:25.715300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.609 [2024-12-10 05:04:25.715315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.609 qpair failed and we were unable to recover it.
00:27:34.609 [2024-12-10 05:04:25.725270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.609 [2024-12-10 05:04:25.725370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.609 [2024-12-10 05:04:25.725383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.609 [2024-12-10 05:04:25.725390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.609 [2024-12-10 05:04:25.725396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.609 [2024-12-10 05:04:25.725412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.609 qpair failed and we were unable to recover it.
00:27:34.609 [2024-12-10 05:04:25.735340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.609 [2024-12-10 05:04:25.735440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.609 [2024-12-10 05:04:25.735452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.609 [2024-12-10 05:04:25.735459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.609 [2024-12-10 05:04:25.735465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.609 [2024-12-10 05:04:25.735480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.609 qpair failed and we were unable to recover it.
00:27:34.869 [2024-12-10 05:04:25.745302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.869 [2024-12-10 05:04:25.745355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.869 [2024-12-10 05:04:25.745369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.869 [2024-12-10 05:04:25.745376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.869 [2024-12-10 05:04:25.745382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.869 [2024-12-10 05:04:25.745398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.869 qpair failed and we were unable to recover it.
00:27:34.869 [2024-12-10 05:04:25.755375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.869 [2024-12-10 05:04:25.755430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.869 [2024-12-10 05:04:25.755444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.869 [2024-12-10 05:04:25.755451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.869 [2024-12-10 05:04:25.755457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.869 [2024-12-10 05:04:25.755472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.869 qpair failed and we were unable to recover it.
00:27:34.869 [2024-12-10 05:04:25.765314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.869 [2024-12-10 05:04:25.765368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.869 [2024-12-10 05:04:25.765381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.869 [2024-12-10 05:04:25.765388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.869 [2024-12-10 05:04:25.765394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.869 [2024-12-10 05:04:25.765409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.869 qpair failed and we were unable to recover it.
00:27:34.869 [2024-12-10 05:04:25.775340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.869 [2024-12-10 05:04:25.775392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.869 [2024-12-10 05:04:25.775405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.869 [2024-12-10 05:04:25.775412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.869 [2024-12-10 05:04:25.775418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:34.869 [2024-12-10 05:04:25.775434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:34.869 qpair failed and we were unable to recover it.
00:27:34.869 [2024-12-10 05:04:25.785358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.869 [2024-12-10 05:04:25.785407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.869 [2024-12-10 05:04:25.785420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.870 [2024-12-10 05:04:25.785427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.870 [2024-12-10 05:04:25.785434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.870 [2024-12-10 05:04:25.785450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.870 qpair failed and we were unable to recover it. 
00:27:34.870 [2024-12-10 05:04:25.795458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.870 [2024-12-10 05:04:25.795518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.870 [2024-12-10 05:04:25.795533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.870 [2024-12-10 05:04:25.795540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.870 [2024-12-10 05:04:25.795547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.870 [2024-12-10 05:04:25.795562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.870 qpair failed and we were unable to recover it. 
00:27:34.870 [2024-12-10 05:04:25.805410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.870 [2024-12-10 05:04:25.805483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.870 [2024-12-10 05:04:25.805496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.870 [2024-12-10 05:04:25.805503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.870 [2024-12-10 05:04:25.805509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.870 [2024-12-10 05:04:25.805524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.870 qpair failed and we were unable to recover it. 
00:27:34.870 [2024-12-10 05:04:25.815380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.870 [2024-12-10 05:04:25.815441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.870 [2024-12-10 05:04:25.815454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.870 [2024-12-10 05:04:25.815461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.870 [2024-12-10 05:04:25.815467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.870 [2024-12-10 05:04:25.815482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.870 qpair failed and we were unable to recover it. 
00:27:34.870 [2024-12-10 05:04:25.825482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.870 [2024-12-10 05:04:25.825545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.870 [2024-12-10 05:04:25.825559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.870 [2024-12-10 05:04:25.825567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.870 [2024-12-10 05:04:25.825573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.870 [2024-12-10 05:04:25.825588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.870 qpair failed and we were unable to recover it. 
00:27:34.870 [2024-12-10 05:04:25.835515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.870 [2024-12-10 05:04:25.835570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.870 [2024-12-10 05:04:25.835583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.870 [2024-12-10 05:04:25.835590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.870 [2024-12-10 05:04:25.835600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.870 [2024-12-10 05:04:25.835616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.870 qpair failed and we were unable to recover it. 
00:27:34.870 [2024-12-10 05:04:25.845536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.870 [2024-12-10 05:04:25.845592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.870 [2024-12-10 05:04:25.845606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.870 [2024-12-10 05:04:25.845613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.870 [2024-12-10 05:04:25.845619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.870 [2024-12-10 05:04:25.845634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.870 qpair failed and we were unable to recover it. 
00:27:34.870 [2024-12-10 05:04:25.855555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.870 [2024-12-10 05:04:25.855637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.870 [2024-12-10 05:04:25.855650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.870 [2024-12-10 05:04:25.855658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.870 [2024-12-10 05:04:25.855664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.870 [2024-12-10 05:04:25.855679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.870 qpair failed and we were unable to recover it. 
00:27:34.870 [2024-12-10 05:04:25.865607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.870 [2024-12-10 05:04:25.865655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.870 [2024-12-10 05:04:25.865669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.870 [2024-12-10 05:04:25.865675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.870 [2024-12-10 05:04:25.865681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.870 [2024-12-10 05:04:25.865697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.870 qpair failed and we were unable to recover it. 
00:27:34.870 [2024-12-10 05:04:25.875626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.870 [2024-12-10 05:04:25.875680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.870 [2024-12-10 05:04:25.875693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.870 [2024-12-10 05:04:25.875700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.870 [2024-12-10 05:04:25.875706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.870 [2024-12-10 05:04:25.875721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.870 qpair failed and we were unable to recover it. 
00:27:34.870 [2024-12-10 05:04:25.885645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.870 [2024-12-10 05:04:25.885699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.870 [2024-12-10 05:04:25.885712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.870 [2024-12-10 05:04:25.885720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.870 [2024-12-10 05:04:25.885726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.870 [2024-12-10 05:04:25.885741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.870 qpair failed and we were unable to recover it. 
00:27:34.870 [2024-12-10 05:04:25.895672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.870 [2024-12-10 05:04:25.895719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.870 [2024-12-10 05:04:25.895733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.870 [2024-12-10 05:04:25.895740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.870 [2024-12-10 05:04:25.895746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.870 [2024-12-10 05:04:25.895761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.870 qpair failed and we were unable to recover it. 
00:27:34.870 [2024-12-10 05:04:25.905689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.870 [2024-12-10 05:04:25.905759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.870 [2024-12-10 05:04:25.905772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.870 [2024-12-10 05:04:25.905779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.870 [2024-12-10 05:04:25.905785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.870 [2024-12-10 05:04:25.905800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.870 qpair failed and we were unable to recover it. 
00:27:34.870 [2024-12-10 05:04:25.915726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.870 [2024-12-10 05:04:25.915782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.870 [2024-12-10 05:04:25.915795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.870 [2024-12-10 05:04:25.915802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.870 [2024-12-10 05:04:25.915808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.871 [2024-12-10 05:04:25.915824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.871 qpair failed and we were unable to recover it. 
00:27:34.871 [2024-12-10 05:04:25.925744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.871 [2024-12-10 05:04:25.925799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.871 [2024-12-10 05:04:25.925816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.871 [2024-12-10 05:04:25.925822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.871 [2024-12-10 05:04:25.925829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.871 [2024-12-10 05:04:25.925844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.871 qpair failed and we were unable to recover it. 
00:27:34.871 [2024-12-10 05:04:25.935776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.871 [2024-12-10 05:04:25.935836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.871 [2024-12-10 05:04:25.935851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.871 [2024-12-10 05:04:25.935858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.871 [2024-12-10 05:04:25.935865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.871 [2024-12-10 05:04:25.935880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.871 qpair failed and we were unable to recover it. 
00:27:34.871 [2024-12-10 05:04:25.945791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.871 [2024-12-10 05:04:25.945847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.871 [2024-12-10 05:04:25.945861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.871 [2024-12-10 05:04:25.945869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.871 [2024-12-10 05:04:25.945875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.871 [2024-12-10 05:04:25.945891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.871 qpair failed and we were unable to recover it. 
00:27:34.871 [2024-12-10 05:04:25.955810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.871 [2024-12-10 05:04:25.955878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.871 [2024-12-10 05:04:25.955891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.871 [2024-12-10 05:04:25.955898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.871 [2024-12-10 05:04:25.955904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.871 [2024-12-10 05:04:25.955920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.871 qpair failed and we were unable to recover it. 
00:27:34.871 [2024-12-10 05:04:25.965871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.871 [2024-12-10 05:04:25.965946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.871 [2024-12-10 05:04:25.965959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.871 [2024-12-10 05:04:25.965967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.871 [2024-12-10 05:04:25.965976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.871 [2024-12-10 05:04:25.965992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.871 qpair failed and we were unable to recover it. 
00:27:34.871 [2024-12-10 05:04:25.975946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.871 [2024-12-10 05:04:25.976003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.871 [2024-12-10 05:04:25.976017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.871 [2024-12-10 05:04:25.976024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.871 [2024-12-10 05:04:25.976030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.871 [2024-12-10 05:04:25.976045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.871 qpair failed and we were unable to recover it. 
00:27:34.871 [2024-12-10 05:04:25.985923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.871 [2024-12-10 05:04:25.985979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.871 [2024-12-10 05:04:25.985993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.871 [2024-12-10 05:04:25.986000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.871 [2024-12-10 05:04:25.986007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.871 [2024-12-10 05:04:25.986021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.871 qpair failed and we were unable to recover it. 
00:27:34.871 [2024-12-10 05:04:25.995954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.871 [2024-12-10 05:04:25.996009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.871 [2024-12-10 05:04:25.996023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.871 [2024-12-10 05:04:25.996030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.871 [2024-12-10 05:04:25.996036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:34.871 [2024-12-10 05:04:25.996051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.871 qpair failed and we were unable to recover it. 
00:27:35.132 [2024-12-10 05:04:26.005957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.132 [2024-12-10 05:04:26.006017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.132 [2024-12-10 05:04:26.006032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.132 [2024-12-10 05:04:26.006040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.132 [2024-12-10 05:04:26.006047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.132 [2024-12-10 05:04:26.006063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.132 qpair failed and we were unable to recover it. 
00:27:35.132 [2024-12-10 05:04:26.015953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.132 [2024-12-10 05:04:26.016003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.132 [2024-12-10 05:04:26.016016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.132 [2024-12-10 05:04:26.016023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.132 [2024-12-10 05:04:26.016029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.132 [2024-12-10 05:04:26.016045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.132 qpair failed and we were unable to recover it. 
00:27:35.132 [2024-12-10 05:04:26.026057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.132 [2024-12-10 05:04:26.026109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.132 [2024-12-10 05:04:26.026123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.132 [2024-12-10 05:04:26.026130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.132 [2024-12-10 05:04:26.026136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.132 [2024-12-10 05:04:26.026151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.132 qpair failed and we were unable to recover it. 
00:27:35.132 [2024-12-10 05:04:26.036075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.132 [2024-12-10 05:04:26.036129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.132 [2024-12-10 05:04:26.036142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.132 [2024-12-10 05:04:26.036149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.132 [2024-12-10 05:04:26.036156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.132 [2024-12-10 05:04:26.036175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.132 qpair failed and we were unable to recover it. 
00:27:35.132 [2024-12-10 05:04:26.046022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.132 [2024-12-10 05:04:26.046083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.132 [2024-12-10 05:04:26.046097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.132 [2024-12-10 05:04:26.046104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.132 [2024-12-10 05:04:26.046110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.132 [2024-12-10 05:04:26.046126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.132 qpair failed and we were unable to recover it. 
00:27:35.132 [2024-12-10 05:04:26.056120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.132 [2024-12-10 05:04:26.056185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.132 [2024-12-10 05:04:26.056199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.132 [2024-12-10 05:04:26.056206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.132 [2024-12-10 05:04:26.056212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.132 [2024-12-10 05:04:26.056227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.132 qpair failed and we were unable to recover it. 
00:27:35.132 [2024-12-10 05:04:26.066143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.132 [2024-12-10 05:04:26.066199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.132 [2024-12-10 05:04:26.066213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.132 [2024-12-10 05:04:26.066219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.132 [2024-12-10 05:04:26.066226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.132 [2024-12-10 05:04:26.066241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.132 qpair failed and we were unable to recover it. 
00:27:35.132 [2024-12-10 05:04:26.076198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.132 [2024-12-10 05:04:26.076254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.132 [2024-12-10 05:04:26.076267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.132 [2024-12-10 05:04:26.076274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.132 [2024-12-10 05:04:26.076280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.132 [2024-12-10 05:04:26.076296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.132 qpair failed and we were unable to recover it. 
00:27:35.132 [2024-12-10 05:04:26.086206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.132 [2024-12-10 05:04:26.086260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.132 [2024-12-10 05:04:26.086273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.132 [2024-12-10 05:04:26.086280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.132 [2024-12-10 05:04:26.086286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.132 [2024-12-10 05:04:26.086301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.132 qpair failed and we were unable to recover it. 
00:27:35.132 [2024-12-10 05:04:26.096231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.132 [2024-12-10 05:04:26.096326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.132 [2024-12-10 05:04:26.096339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.132 [2024-12-10 05:04:26.096349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.132 [2024-12-10 05:04:26.096355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.132 [2024-12-10 05:04:26.096371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.132 qpair failed and we were unable to recover it. 
00:27:35.132 [2024-12-10 05:04:26.106262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.132 [2024-12-10 05:04:26.106331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.132 [2024-12-10 05:04:26.106344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.132 [2024-12-10 05:04:26.106351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.132 [2024-12-10 05:04:26.106357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.132 [2024-12-10 05:04:26.106372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.132 qpair failed and we were unable to recover it. 
00:27:35.132 [2024-12-10 05:04:26.116379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.132 [2024-12-10 05:04:26.116435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.132 [2024-12-10 05:04:26.116448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.132 [2024-12-10 05:04:26.116455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.132 [2024-12-10 05:04:26.116461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.132 [2024-12-10 05:04:26.116476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.132 qpair failed and we were unable to recover it. 
00:27:35.132 [2024-12-10 05:04:26.126356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.133 [2024-12-10 05:04:26.126420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.133 [2024-12-10 05:04:26.126434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.133 [2024-12-10 05:04:26.126441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.133 [2024-12-10 05:04:26.126448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.133 [2024-12-10 05:04:26.126462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.133 qpair failed and we were unable to recover it. 
00:27:35.133 [2024-12-10 05:04:26.136356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.133 [2024-12-10 05:04:26.136415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.133 [2024-12-10 05:04:26.136428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.133 [2024-12-10 05:04:26.136436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.133 [2024-12-10 05:04:26.136442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.133 [2024-12-10 05:04:26.136461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.133 qpair failed and we were unable to recover it. 
00:27:35.133 [2024-12-10 05:04:26.146380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.133 [2024-12-10 05:04:26.146434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.133 [2024-12-10 05:04:26.146448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.133 [2024-12-10 05:04:26.146455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.133 [2024-12-10 05:04:26.146461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.133 [2024-12-10 05:04:26.146476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.133 qpair failed and we were unable to recover it. 
00:27:35.133 [2024-12-10 05:04:26.156428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.133 [2024-12-10 05:04:26.156482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.133 [2024-12-10 05:04:26.156495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.133 [2024-12-10 05:04:26.156502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.133 [2024-12-10 05:04:26.156508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.133 [2024-12-10 05:04:26.156524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.133 qpair failed and we were unable to recover it. 
00:27:35.133 [2024-12-10 05:04:26.166513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.133 [2024-12-10 05:04:26.166568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.133 [2024-12-10 05:04:26.166581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.133 [2024-12-10 05:04:26.166588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.133 [2024-12-10 05:04:26.166595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.133 [2024-12-10 05:04:26.166611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.133 qpair failed and we were unable to recover it. 
00:27:35.133 [2024-12-10 05:04:26.176394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.133 [2024-12-10 05:04:26.176447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.133 [2024-12-10 05:04:26.176460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.133 [2024-12-10 05:04:26.176467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.133 [2024-12-10 05:04:26.176474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.133 [2024-12-10 05:04:26.176489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.133 qpair failed and we were unable to recover it. 
00:27:35.133 [2024-12-10 05:04:26.186466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.133 [2024-12-10 05:04:26.186523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.133 [2024-12-10 05:04:26.186536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.133 [2024-12-10 05:04:26.186543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.133 [2024-12-10 05:04:26.186550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90 00:27:35.133 [2024-12-10 05:04:26.186565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.133 qpair failed and we were unable to recover it. 
00:27:35.133 [2024-12-10 05:04:26.196551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:35.133 [2024-12-10 05:04:26.196608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:35.133 [2024-12-10 05:04:26.196621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:35.133 [2024-12-10 05:04:26.196628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:35.133 [2024-12-10 05:04:26.196635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f58e0000b90
00:27:35.133 [2024-12-10 05:04:26.196650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:35.133 qpair failed and we were unable to recover it.
00:27:35.133 [2024-12-10 05:04:26.196756] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:27:35.133 A controller has encountered a failure and is being reset.
00:27:35.133 [2024-12-10 05:04:26.206572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:35.133 [2024-12-10 05:04:26.206679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:35.133 [2024-12-10 05:04:26.206737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:35.133 [2024-12-10 05:04:26.206765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:35.133 [2024-12-10 05:04:26.206789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12521a0
00:27:35.133 [2024-12-10 05:04:26.206839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:35.133 qpair failed and we were unable to recover it.
00:27:35.133 [2024-12-10 05:04:26.216526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:35.133 [2024-12-10 05:04:26.216601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:35.133 [2024-12-10 05:04:26.216631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:35.133 [2024-12-10 05:04:26.216646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:35.133 [2024-12-10 05:04:26.216661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12521a0
00:27:35.133 [2024-12-10 05:04:26.216693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:35.133 qpair failed and we were unable to recover it.
00:27:35.133 Controller properly reset.
00:27:35.133 Initializing NVMe Controllers
00:27:35.133 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:35.133 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:35.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:27:35.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:27:35.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:27:35.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:27:35.133 Initialization complete. Launching workers.
00:27:35.133 Starting thread on core 1 00:27:35.133 Starting thread on core 2 00:27:35.133 Starting thread on core 3 00:27:35.133 Starting thread on core 0 00:27:35.133 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:35.133 00:27:35.133 real 0m11.371s 00:27:35.133 user 0m21.822s 00:27:35.133 sys 0m4.638s 00:27:35.133 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:35.133 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.133 ************************************ 00:27:35.133 END TEST nvmf_target_disconnect_tc2 00:27:35.133 ************************************ 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:35.393 rmmod nvme_tcp 00:27:35.393 rmmod nvme_fabrics 00:27:35.393 rmmod nvme_keyring 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 784910 ']' 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 784910 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 784910 ']' 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 784910 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784910 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 784910' 00:27:35.393 killing process with pid 784910 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 784910 00:27:35.393 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 784910 00:27:35.652 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:35.652 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:35.652 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:35.652 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:35.652 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:35.652 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:35.652 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:35.652 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:35.653 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:35.653 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.653 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.653 05:04:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.559 05:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:37.559 00:27:37.559 real 0m20.089s 00:27:37.559 user 0m49.327s 00:27:37.559 sys 0m9.486s 00:27:37.559 05:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:37.559 05:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:37.559 ************************************ 00:27:37.559 END TEST nvmf_target_disconnect 00:27:37.559 ************************************ 00:27:37.818 05:04:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:37.818 00:27:37.818 real 5m50.872s 00:27:37.819 user 10m37.040s 00:27:37.819 sys 1m56.944s 00:27:37.819 05:04:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:37.819 05:04:28 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.819 ************************************ 00:27:37.819 END TEST nvmf_host 00:27:37.819 ************************************ 00:27:37.819 05:04:28 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:37.819 05:04:28 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:37.819 05:04:28 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:37.819 05:04:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:37.819 05:04:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:37.819 05:04:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:37.819 ************************************ 00:27:37.819 START TEST nvmf_target_core_interrupt_mode 00:27:37.819 ************************************ 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:37.819 * Looking for test storage... 
00:27:37.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:37.819 05:04:28 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:37.819 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:38.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.079 --rc 
genhtml_branch_coverage=1 00:27:38.079 --rc genhtml_function_coverage=1 00:27:38.079 --rc genhtml_legend=1 00:27:38.079 --rc geninfo_all_blocks=1 00:27:38.079 --rc geninfo_unexecuted_blocks=1 00:27:38.079 00:27:38.079 ' 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:38.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.079 --rc genhtml_branch_coverage=1 00:27:38.079 --rc genhtml_function_coverage=1 00:27:38.079 --rc genhtml_legend=1 00:27:38.079 --rc geninfo_all_blocks=1 00:27:38.079 --rc geninfo_unexecuted_blocks=1 00:27:38.079 00:27:38.079 ' 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:38.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.079 --rc genhtml_branch_coverage=1 00:27:38.079 --rc genhtml_function_coverage=1 00:27:38.079 --rc genhtml_legend=1 00:27:38.079 --rc geninfo_all_blocks=1 00:27:38.079 --rc geninfo_unexecuted_blocks=1 00:27:38.079 00:27:38.079 ' 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:38.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.079 --rc genhtml_branch_coverage=1 00:27:38.079 --rc genhtml_function_coverage=1 00:27:38.079 --rc genhtml_legend=1 00:27:38.079 --rc geninfo_all_blocks=1 00:27:38.079 --rc geninfo_unexecuted_blocks=1 00:27:38.079 00:27:38.079 ' 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.079 
05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.079 05:04:28 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:38.079 
05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:38.079 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:27:38.080 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:27:38.080 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:27:38.080 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:27:38.080 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:27:38.080 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:38.080 05:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:27:38.080 ************************************
00:27:38.080 START TEST nvmf_abort
00:27:38.080 ************************************
00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:27:38.080 * Looking for test storage...
00:27:38.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:38.080 05:04:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:38.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.080 --rc genhtml_branch_coverage=1 00:27:38.080 --rc genhtml_function_coverage=1 00:27:38.080 --rc genhtml_legend=1 00:27:38.080 --rc geninfo_all_blocks=1 00:27:38.080 --rc geninfo_unexecuted_blocks=1 00:27:38.080 00:27:38.080 ' 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:38.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.080 --rc genhtml_branch_coverage=1 00:27:38.080 --rc genhtml_function_coverage=1 00:27:38.080 --rc genhtml_legend=1 00:27:38.080 --rc geninfo_all_blocks=1 00:27:38.080 --rc geninfo_unexecuted_blocks=1 00:27:38.080 00:27:38.080 ' 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:38.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.080 --rc genhtml_branch_coverage=1 00:27:38.080 --rc genhtml_function_coverage=1 00:27:38.080 --rc genhtml_legend=1 00:27:38.080 --rc geninfo_all_blocks=1 00:27:38.080 --rc geninfo_unexecuted_blocks=1 00:27:38.080 00:27:38.080 ' 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:38.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.080 --rc genhtml_branch_coverage=1 00:27:38.080 --rc genhtml_function_coverage=1 00:27:38.080 --rc genhtml_legend=1 00:27:38.080 --rc geninfo_all_blocks=1 00:27:38.080 --rc geninfo_unexecuted_blocks=1 00:27:38.080 00:27:38.080 ' 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.080 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.340 05:04:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.340 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:38.341 05:04:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:38.341 05:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:44.912 05:04:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:44.912 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:27:44.912 Found 0000:af:00.0 (0x8086 - 0x159b)
00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:27:44.913 Found 0000:af:00.1 (0x8086 - 0x159b)
00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:44.913 
05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:44.913 Found net devices under 0000:af:00.0: cvl_0_0 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:44.913 Found net devices under 0000:af:00.1: cvl_0_1 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:44.913 05:04:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:44.913 05:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:44.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:44.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms
00:27:44.913 
00:27:44.913 --- 10.0.0.2 ping statistics ---
00:27:44.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:44.913 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:44.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:44.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms
00:27:44.913 
00:27:44.913 --- 10.0.0.1 ping statistics ---
00:27:44.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:44.913 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=789363 00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 789363 00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 789363 ']' 00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:44.913 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.913 [2024-12-10 05:04:35.133665] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:44.913 [2024-12-10 05:04:35.134629] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:27:44.913 [2024-12-10 05:04:35.134669] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.913 [2024-12-10 05:04:35.214601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:44.913 [2024-12-10 05:04:35.254831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:44.913 [2024-12-10 05:04:35.254868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:44.913 [2024-12-10 05:04:35.254874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:44.913 [2024-12-10 05:04:35.254880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:44.914 [2024-12-10 05:04:35.254885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:44.914 [2024-12-10 05:04:35.256214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:44.914 [2024-12-10 05:04:35.256325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.914 [2024-12-10 05:04:35.256326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:44.914 [2024-12-10 05:04:35.324188] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:44.914 [2024-12-10 05:04:35.324943] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:44.914 [2024-12-10 05:04:35.325056] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:44.914 [2024-12-10 05:04:35.325206] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.914 [2024-12-10 05:04:35.393099] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:27:44.914 Malloc0 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.914 Delay0 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.914 [2024-12-10 05:04:35.481136] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.914 05:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:44.914 [2024-12-10 05:04:35.612856] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:46.819 Initializing NVMe Controllers 00:27:46.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:46.819 controller IO queue size 128 less than required 00:27:46.819 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:46.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:46.819 Initialization complete. Launching workers. 
00:27:46.819 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38038 00:27:46.819 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38095, failed to submit 66 00:27:46.819 success 38038, unsuccessful 57, failed 0 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:46.819 rmmod nvme_tcp 00:27:46.819 rmmod nvme_fabrics 00:27:46.819 rmmod nvme_keyring 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:46.819 05:04:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 789363 ']' 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 789363 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 789363 ']' 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 789363 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 789363 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 789363' 00:27:46.819 killing process with pid 789363 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 789363 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 789363 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:46.819 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:47.078 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:47.078 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:47.078 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.078 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:47.078 05:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.983 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:48.983 00:27:48.983 real 0m10.992s 00:27:48.983 user 0m10.181s 00:27:48.983 sys 0m5.596s 00:27:48.983 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:48.983 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:48.983 ************************************ 00:27:48.983 END TEST nvmf_abort 00:27:48.983 ************************************ 00:27:48.983 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:48.983 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:48.983 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:48.984 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:48.984 ************************************ 00:27:48.984 START TEST nvmf_ns_hotplug_stress 00:27:48.984 ************************************ 00:27:48.984 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:49.244 * Looking for test storage... 00:27:49.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:49.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.244 --rc genhtml_branch_coverage=1 00:27:49.244 --rc genhtml_function_coverage=1 00:27:49.244 --rc genhtml_legend=1 00:27:49.244 --rc geninfo_all_blocks=1 00:27:49.244 --rc geninfo_unexecuted_blocks=1 00:27:49.244 00:27:49.244 ' 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:49.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.244 --rc genhtml_branch_coverage=1 00:27:49.244 --rc genhtml_function_coverage=1 00:27:49.244 --rc genhtml_legend=1 00:27:49.244 --rc geninfo_all_blocks=1 00:27:49.244 --rc geninfo_unexecuted_blocks=1 00:27:49.244 00:27:49.244 ' 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:49.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.244 --rc genhtml_branch_coverage=1 00:27:49.244 --rc genhtml_function_coverage=1 00:27:49.244 --rc genhtml_legend=1 00:27:49.244 --rc geninfo_all_blocks=1 00:27:49.244 --rc geninfo_unexecuted_blocks=1 00:27:49.244 00:27:49.244 ' 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:49.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.244 --rc genhtml_branch_coverage=1 00:27:49.244 --rc genhtml_function_coverage=1 00:27:49.244 --rc genhtml_legend=1 00:27:49.244 --rc geninfo_all_blocks=1 00:27:49.244 --rc geninfo_unexecuted_blocks=1 00:27:49.244 00:27:49.244 ' 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:49.244 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:49.245 05:04:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:49.245 05:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:49.245 05:04:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:55.818 05:04:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:55.818 
05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:55.818 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:55.818 05:04:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:55.818 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:55.818 05:04:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:55.818 Found net devices under 0000:af:00.0: cvl_0_0 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.818 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:55.819 Found net devices under 0000:af:00.1: cvl_0_1 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:55.819 05:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:55.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:55.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:27:55.819 00:27:55.819 --- 10.0.0.2 ping statistics --- 00:27:55.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.819 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:55.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:55.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:27:55.819 00:27:55.819 --- 10.0.0.1 ping statistics --- 00:27:55.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.819 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:55.819 05:04:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=793288 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 793288 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 793288 ']' 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:55.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:55.819 [2024-12-10 05:04:46.219663] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:55.819 [2024-12-10 05:04:46.220609] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:27:55.819 [2024-12-10 05:04:46.220649] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:55.819 [2024-12-10 05:04:46.299519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:55.819 [2024-12-10 05:04:46.339265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:55.819 [2024-12-10 05:04:46.339299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:55.819 [2024-12-10 05:04:46.339307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:55.819 [2024-12-10 05:04:46.339313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:55.819 [2024-12-10 05:04:46.339318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:55.819 [2024-12-10 05:04:46.340632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:55.819 [2024-12-10 05:04:46.340744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.819 [2024-12-10 05:04:46.340746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:55.819 [2024-12-10 05:04:46.408374] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:55.819 [2024-12-10 05:04:46.409143] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:55.819 [2024-12-10 05:04:46.409372] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:55.819 [2024-12-10 05:04:46.409475] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:55.819 [2024-12-10 05:04:46.649590] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:55.819 05:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:56.078 [2024-12-10 05:04:47.041919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:56.079 05:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:56.337 05:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:56.337 Malloc0 00:27:56.337 05:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:56.596 Delay0 00:27:56.596 05:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.855 05:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:57.114 NULL1 00:27:57.114 05:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:57.114 05:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=793679 00:27:57.114 05:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:57.114 05:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:27:57.114 05:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.491 Read completed with error (sct=0, sc=11) 00:27:58.491 05:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:27:58.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.750 05:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:58.750 05:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:58.750 true 00:27:59.008 05:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:27:59.008 05:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.575 05:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.834 05:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:59.834 05:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:00.093 true 00:28:00.093 05:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:00.093 05:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:28:00.352 05:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.610 05:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:00.610 05:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:00.610 true 00:28:00.610 05:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:00.610 05:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.987 05:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.987 05:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:01.987 05:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:01.987 true 00:28:02.246 05:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:02.246 05:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.246 05:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.504 05:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:02.504 05:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:02.763 true 00:28:02.763 05:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:02.763 05:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.809 05:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.069 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:28:04.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.069 05:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:04.069 05:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:04.329 true 00:28:04.329 05:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:04.329 05:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.266 05:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.266 05:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:05.266 05:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:05.525 true 00:28:05.525 05:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:05.525 05:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.784 05:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.042 05:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:06.042 05:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:06.042 true 00:28:06.042 05:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:06.042 05:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.420 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.420 05:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.420 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.420 05:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:07.420 05:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:07.420 true 00:28:07.678 05:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:07.678 05:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.678 05:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.937 05:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:07.937 05:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:08.195 true 00:28:08.195 05:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:08.195 05:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.131 05:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:09.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.390 05:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1011 00:28:09.390 05:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:09.649 true 00:28:09.649 05:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:09.649 05:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.586 05:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.586 05:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:10.586 05:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:10.845 true 00:28:10.845 05:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:10.845 05:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.104 05:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:11.363 05:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:11.363 05:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:11.363 true 00:28:11.363 05:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:11.363 05:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.741 05:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.741 05:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:12.741 05:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:12.999 true 00:28:12.999 05:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:12.999 05:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.999 05:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:28:13.259 05:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:13.259 05:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:13.518 true 00:28:13.518 05:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:13.518 05:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.713 05:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:14.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.713 05:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:14.713 05:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:14.972 true 00:28:14.972 05:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:14.972 05:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:28:15.231 05:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.490 05:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:15.490 05:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:15.490 true 00:28:15.490 05:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:15.490 05:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.868 05:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:16.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.127 05:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1018 00:28:17.127 05:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:17.127 true 00:28:17.386 05:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:17.386 05:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.953 05:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:17.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:18.212 05:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:18.212 05:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:18.471 true 00:28:18.471 05:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:18.471 05:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.730 05:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:18.730 05:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:18.730 05:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:18.989 true 00:28:18.989 05:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:18.989 05:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.367 05:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.367 05:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:20.367 05:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:20.625 true 00:28:20.625 05:05:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:20.625 05:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.562 05:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:21.562 05:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:21.562 05:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:21.820 true 00:28:21.820 05:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:21.820 05:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.079 05:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.079 05:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:22.079 05:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:22.338 true 
00:28:22.338 05:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:22.338 05:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:23.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:23.535 05:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:23.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:23.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:23.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:23.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:23.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:23.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:23.535 05:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:23.535 05:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:23.794 true 00:28:23.794 05:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:23.794 05:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:24.730 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.730 05:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:24.730 05:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:24.730 05:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:24.990 true 00:28:24.990 05:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:24.990 05:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:25.248 05:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:25.507 05:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:25.507 05:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:25.507 true 00:28:25.766 05:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:25.766 05:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:26.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.703 05:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:26.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.961 05:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:26.961 05:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:27.220 true 00:28:27.220 05:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679 00:28:27.220 05:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:28.157 05:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:28.157 Initializing NVMe 
Controllers
00:28:28.157 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:28.157 Controller IO queue size 128, less than required.
00:28:28.158 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:28.158 Controller IO queue size 128, less than required.
00:28:28.158 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:28.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:28.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:28.158 Initialization complete. Launching workers.
00:28:28.158 ========================================================
00:28:28.158                                                                           Latency(us)
00:28:28.158 Device Information                                                      : IOPS       MiB/s    Average      min          max
00:28:28.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2028.80       0.99    43362.59     1872.93  1012305.56
00:28:28.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   17945.37       8.76     7132.84     1580.74   442869.71
00:28:28.158 ========================================================
00:28:28.158 Total                                                                  :   19974.17       9.75    10812.74     1580.74  1012305.56
00:28:28.158
00:28:28.158 05:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:28:28.158 05:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:28:28.417 true
00:28:28.417 05:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 793679
00:28:28.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (793679) - No such process 00:28:28.417 05:05:19
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 793679 00:28:28.417 05:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:28.676 05:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:28.676 05:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:28.676 05:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:28.676 05:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:28.676 05:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:28.676 05:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:28.935 null0 00:28:28.935 05:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:28.935 05:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:28.935 05:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:29.194 null1 00:28:29.194 05:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:29.194 05:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.194 05:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:29.453 null2 00:28:29.453 05:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:29.453 05:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.453 05:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:29.453 null3 00:28:29.453 05:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:29.453 05:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.453 05:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:29.713 null4 00:28:29.713 05:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:29.713 05:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.713 05:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:29.972 null5 
00:28:29.972 05:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:29.972 05:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.972 05:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:29.972 null6 00:28:29.972 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:29.972 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.972 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:30.232 null7 00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 798956 798958 798959 798961 798963 798965 798967 798968
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:30.232 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:30.492 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:30.492 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:30.492 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:30.492 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:30.492 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:30.492 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:30.492 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:30.492 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:30.751 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:30.752 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:30.752 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:31.011 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:31.011 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:31.011 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:31.011 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:31.011 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:31.011 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:31.011 05:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.011 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.012 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:31.012 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.012 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.012 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:31.271 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:31.272 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:31.272 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:31.272 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:31.272 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:31.272 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:31.272 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:31.272 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.531 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:31.790 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:31.790 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:31.790 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:31.790 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:31.790 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:31.790 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:31.790 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:31.790 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:31.790 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.790 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.790 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:31.790 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:31.790 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:31.790 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:32.049 05:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:32.049 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:32.049 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:28:32.049 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:32.049 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:32.049 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:28:32.049 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:28:32.049 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:28:32.049 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:32.309 05:05:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.309 05:05:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.309 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:32.569 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.569 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:32.569 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:32.569 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:32.569 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:32.569 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:32.569 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:32.569 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 
null6 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:32.829 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.089 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:33.089 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:33.089 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:28:33.089 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:33.089 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:33.089 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:33.089 05:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.089 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:33.348 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:33.348 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.348 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:33.348 05:05:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:33.348 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:33.348 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:33.348 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:33.348 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.608 05:05:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.608 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:33.867 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.867 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:33.867 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:33.867 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:33.867 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:33.867 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:33.867 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:33.867 05:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:34.125 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.125 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.125 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:34.125 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.125 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.125 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:34.125 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.125 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.125 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:34.125 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.125 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.125 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:34.125 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.125 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 
null6 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:34.126 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:34.384 05:05:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:34.384 rmmod nvme_tcp 00:28:34.384 rmmod nvme_fabrics 00:28:34.384 rmmod nvme_keyring 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 793288 ']' 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 793288 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 793288 ']' 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 793288 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@959 -- # uname 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:34.384 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 793288 00:28:34.643 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:34.643 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:34.643 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 793288' 00:28:34.643 killing process with pid 793288 00:28:34.643 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 793288 00:28:34.643 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 793288 00:28:34.643 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:34.643 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:34.643 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:34.643 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:34.643 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:34.643 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:34.643 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # 
iptables-restore 00:28:34.643 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:34.643 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:34.643 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.643 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.643 05:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.178 05:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:37.179 00:28:37.179 real 0m47.702s 00:28:37.179 user 2m58.676s 00:28:37.179 sys 0m19.644s 00:28:37.179 05:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.179 05:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:37.179 ************************************ 00:28:37.179 END TEST nvmf_ns_hotplug_stress 00:28:37.179 ************************************ 00:28:37.179 05:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:37.179 05:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:37.179 05:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.179 05:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- 
# set +x 00:28:37.179 ************************************ 00:28:37.179 START TEST nvmf_delete_subsystem 00:28:37.179 ************************************ 00:28:37.179 05:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:37.179 * Looking for test storage... 00:28:37.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:37.179 05:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:37.179 05:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:28:37.179 05:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read 
-ra ver2 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:37.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.179 --rc genhtml_branch_coverage=1 00:28:37.179 --rc genhtml_function_coverage=1 00:28:37.179 --rc genhtml_legend=1 00:28:37.179 --rc geninfo_all_blocks=1 00:28:37.179 --rc geninfo_unexecuted_blocks=1 00:28:37.179 00:28:37.179 ' 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:37.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.179 --rc genhtml_branch_coverage=1 00:28:37.179 --rc genhtml_function_coverage=1 00:28:37.179 --rc genhtml_legend=1 00:28:37.179 --rc geninfo_all_blocks=1 00:28:37.179 --rc geninfo_unexecuted_blocks=1 00:28:37.179 00:28:37.179 ' 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:37.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.179 --rc genhtml_branch_coverage=1 00:28:37.179 --rc genhtml_function_coverage=1 00:28:37.179 --rc genhtml_legend=1 00:28:37.179 --rc geninfo_all_blocks=1 00:28:37.179 --rc geninfo_unexecuted_blocks=1 00:28:37.179 00:28:37.179 ' 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:37.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.179 --rc genhtml_branch_coverage=1 00:28:37.179 --rc genhtml_function_coverage=1 00:28:37.179 --rc genhtml_legend=1 00:28:37.179 --rc geninfo_all_blocks=1 00:28:37.179 --rc geninfo_unexecuted_blocks=1 00:28:37.179 00:28:37.179 ' 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.179 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.180 05:05:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:37.180 05:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.749 05:05:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:43.749 05:05:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:43.749 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:43.749 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.749 05:05:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:43.749 Found net devices under 0000:af:00.0: cvl_0_0 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:43.749 Found net devices under 0000:af:00.1: cvl_0_1 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:43.749 05:05:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.749 05:05:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:43.749 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:43.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:43.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:28:43.750 00:28:43.750 --- 10.0.0.2 ping statistics --- 00:28:43.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.750 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:43.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:28:43.750 00:28:43.750 --- 10.0.0.1 ping statistics --- 00:28:43.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.750 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:43.750 05:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:43.750 
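The network bring-up interleaved in the trace above (nvmf_tcp_init in nvmf/common.sh) can be pulled out into a dry-run sketch. Interface names, the namespace name, and the addresses are taken from the log; the `run` wrapper is illustrative only — it echoes each command instead of executing it, since the real sequence needs root and the e810 hardware from this CI node.

```shell
# Dry-run sketch of the TCP test topology set up in the log above.
# cvl_0_0 / cvl_0_1, cvl_0_0_ns_spdk, 10.0.0.1/24 and 10.0.0.2/24 all
# come from the trace; swap 'echo' for real execution under root.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }

run ip -4 addr flush cvl_0_0            # clear any stale addresses
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"                  # isolate the target side
run ip link set cvl_0_0 netns "$NS"     # target NIC moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1 # initiator IP, host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                  # verify initiator -> target path
```

The point of the namespace split is that both "hosts" live on one physical machine: the target NIC and IP sit inside `cvl_0_0_ns_spdk`, and the later log lines launch `nvmf_tgt` with `ip netns exec` so it listens on 10.0.0.2:4420 from inside that namespace.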
05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=803252 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 803252 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 803252 ']' 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.750 [2024-12-10 05:05:34.073996] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:43.750 [2024-12-10 05:05:34.074903] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:28:43.750 [2024-12-10 05:05:34.074939] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.750 [2024-12-10 05:05:34.152440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:43.750 [2024-12-10 05:05:34.192033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.750 [2024-12-10 05:05:34.192067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.750 [2024-12-10 05:05:34.192074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.750 [2024-12-10 05:05:34.192080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.750 [2024-12-10 05:05:34.192086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:43.750 [2024-12-10 05:05:34.193120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.750 [2024-12-10 05:05:34.193121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.750 [2024-12-10 05:05:34.260993] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:28:43.750 [2024-12-10 05:05:34.261501] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:43.750 [2024-12-10 05:05:34.261753] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.750 [2024-12-10 05:05:34.333910] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.750 [2024-12-10 05:05:34.366251] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.750 NULL1 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.750 Delay0 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=803273 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:43.750 05:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:43.750 [2024-12-10 05:05:34.480562] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
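Stripped of the xtrace noise, the RPC sequence in delete_subsystem.sh above builds a deliberately slow target — a null bdev wrapped in a delay bdev with 1-second latencies — so that I/O from `spdk_nvme_perf` is still in flight when the subsystem is deleted. A dry-run sketch of that sequence follows; the `rpc.py` invocation is an assumption (the log drives these through the test framework's `rpc_cmd` helper), and `echo` stands in for actually issuing the calls against a running target.

```shell
# Dry-run sketch of the delete_subsystem.sh RPC sequence from the log.
# All NQNs, names, and parameters are copied from the trace; 'rpc.py' as
# the transport for rpc_cmd is an assumption.
RPC="echo rpc.py"   # drop the 'echo' to issue real calls

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512    # 1000 MiB backing bdev, 512 B blocks
# 1,000,000 us (= 1 s) average/p99 latency on reads and writes keeps
# queues full so the delete races against outstanding I/O:
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# ...perf runs against the namespace, then:
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

Read against this sketch, the wall of `Read/Write completed with error (sct=0, sc=8)` lines that follows is the expected outcome, not a failure: deleting the subsystem aborts the queued I/O, and perf reports each aborted command as it completes.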
00:28:45.653 05:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:45.654 05:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.654 05:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Write completed with error (sct=0, 
sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 
00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O 
failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting 
I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Read 
completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 starting I/O failed: -6 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 [2024-12-10 05:05:36.609597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f904c00d490 is same with the state(6) to be set 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Write completed with error (sct=0, sc=8) 00:28:45.654 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Write completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Write completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, 
sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Write completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Write completed with error (sct=0, sc=8) 00:28:45.655 Write completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Write completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Write completed with error (sct=0, sc=8) 00:28:45.655 Write completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:45.655 Write completed with error (sct=0, sc=8) 00:28:45.655 Read completed with error (sct=0, sc=8) 00:28:46.592 [2024-12-10 05:05:37.575057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e9b0 is same with the state(6) to be set 00:28:46.592 Read completed with error (sct=0, sc=8) 00:28:46.592 Read completed with error (sct=0, sc=8) 00:28:46.592 Write completed with error (sct=0, sc=8) 00:28:46.592 Read completed with error (sct=0, sc=8) 00:28:46.592 Read completed with error (sct=0, sc=8) 00:28:46.592 Read completed with error (sct=0, sc=8) 00:28:46.592 Read completed with error (sct=0, sc=8) 00:28:46.592 Write completed with error (sct=0, sc=8) 00:28:46.592 Write completed with error (sct=0, sc=8) 00:28:46.592 Write completed with error (sct=0, sc=8) 00:28:46.592 Read completed with error (sct=0, sc=8) 00:28:46.592 Read completed with error (sct=0, sc=8) 
00:28:46.592 Read completed with error (sct=0, sc=8)
00:28:46.592 Write completed with error (sct=0, sc=8)
[... repeated Read/Write completed with error (sct=0, sc=8) lines omitted ...]
00:28:46.592 [2024-12-10 05:05:37.611961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7db40 is same with the state(6) to be set
00:28:46.592 [2024-12-10 05:05:37.612075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f904c00d7c0 is same with the state(6) to be set
00:28:46.592 [2024-12-10 05:05:37.612238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f904c00d020 is same with the state(6) to be set
00:28:46.593 [2024-12-10 05:05:37.613157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7d780 is same with the state(6) to be set
00:28:46.593 Initializing NVMe Controllers
00:28:46.593 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:46.593 Controller IO queue size 128, less than required.
00:28:46.593 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:46.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:46.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:46.593 Initialization complete. Launching workers.
00:28:46.593 ========================================================
00:28:46.593 Latency(us)
00:28:46.593 Device Information : IOPS MiB/s Average min max
00:28:46.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 188.02 0.09 896837.92 346.91 1013067.79
00:28:46.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.72 0.08 911254.86 255.72 1013541.35
00:28:46.593 ========================================================
00:28:46.593 Total : 350.74 0.17 903526.40 255.72 1013541.35
00:28:46.593
00:28:46.593 [2024-12-10 05:05:37.613665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7e9b0 (9): Bad file descriptor
00:28:46.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:46.593 05:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.593 05:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:28:46.593 05:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 803273
00:28:46.593 05:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 803273
00:28:47.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (803273) - No such process
00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 803273
00:28:47.162 05:05:38
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 803273 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 803273 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:47.162 [2024-12-10 05:05:38.146122] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=803941 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 803941 00:28:47.162 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:47.162 [2024-12-10 05:05:38.231470] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:47.730 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:47.730 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 803941 00:28:47.730 05:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:48.298 05:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:48.298 05:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 803941 00:28:48.298 05:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:48.557 05:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:48.557 05:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 803941 00:28:48.557 05:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:49.124 05:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( 
delay++ > 20 )) 00:28:49.124 05:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 803941 00:28:49.124 05:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:49.692 05:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:49.692 05:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 803941 00:28:49.692 05:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:50.260 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:50.260 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 803941 00:28:50.260 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:50.260 Initializing NVMe Controllers 00:28:50.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:50.260 Controller IO queue size 128, less than required. 00:28:50.260 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:50.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:50.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:50.260 Initialization complete. Launching workers. 
00:28:50.260 ========================================================
00:28:50.260 Latency(us)
00:28:50.260 Device Information : IOPS MiB/s Average min max
00:28:50.260 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002378.37 1000183.85 1006727.15
00:28:50.260 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004048.86 1000318.40 1010507.71
00:28:50.260 ========================================================
00:28:50.260 Total : 256.00 0.12 1003213.62 1000183.85 1010507.71
00:28:50.260
00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 803941
00:28:50.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (803941) - No such process
00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 803941
00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:50.828 rmmod nvme_tcp 00:28:50.828 rmmod nvme_fabrics 00:28:50.828 rmmod nvme_keyring 00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 803252 ']' 00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 803252 00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 803252 ']' 00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 803252 00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 803252 00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 803252' 00:28:50.828 killing process with pid 803252 00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 803252 00:28:50.828 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 803252 00:28:51.087 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:51.087 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:51.087 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:51.087 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:51.087 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:51.087 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:51.087 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:51.087 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.087 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:51.087 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.087 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.087 05:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.993 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:52.993 00:28:52.993 real 0m16.166s 00:28:52.993 user 0m26.010s 00:28:52.993 sys 0m5.992s 00:28:52.993 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:52.993 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:52.993 ************************************ 00:28:52.993 END TEST nvmf_delete_subsystem 00:28:52.993 ************************************ 00:28:52.993 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:52.993 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:52.993 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:52.993 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:52.993 ************************************ 00:28:52.993 START TEST nvmf_host_management 00:28:52.993 ************************************ 00:28:52.993 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:53.253 * Looking for test storage... 
00:28:53.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:53.253 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:53.253 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:28:53.253 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:53.253 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:53.253 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:53.253 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:53.253 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:53.253 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:53.253 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:53.253 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:53.253 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:53.253 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:53.253 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:53.253 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:53.253 05:05:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:53.253 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:53.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.254 --rc genhtml_branch_coverage=1 00:28:53.254 --rc genhtml_function_coverage=1 00:28:53.254 --rc genhtml_legend=1 00:28:53.254 --rc geninfo_all_blocks=1 00:28:53.254 --rc geninfo_unexecuted_blocks=1 00:28:53.254 00:28:53.254 ' 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:53.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.254 --rc genhtml_branch_coverage=1 00:28:53.254 --rc genhtml_function_coverage=1 00:28:53.254 --rc genhtml_legend=1 00:28:53.254 --rc geninfo_all_blocks=1 00:28:53.254 --rc geninfo_unexecuted_blocks=1 00:28:53.254 00:28:53.254 ' 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:53.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.254 --rc genhtml_branch_coverage=1 00:28:53.254 --rc genhtml_function_coverage=1 00:28:53.254 --rc genhtml_legend=1 00:28:53.254 --rc geninfo_all_blocks=1 00:28:53.254 --rc geninfo_unexecuted_blocks=1 00:28:53.254 00:28:53.254 ' 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:53.254 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.254 --rc genhtml_branch_coverage=1 00:28:53.254 --rc genhtml_function_coverage=1 00:28:53.254 --rc genhtml_legend=1 00:28:53.254 --rc geninfo_all_blocks=1 00:28:53.254 --rc geninfo_unexecuted_blocks=1 00:28:53.254 00:28:53.254 ' 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:53.254 05:05:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.254 
05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.254 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.255 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:53.255 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:53.255 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:53.255 05:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:59.931 
05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.931 05:05:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:59.931 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.931 05:05:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:59.931 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.931 05:05:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:59.931 Found net devices under 0000:af:00.0: cvl_0_0 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:59.931 Found net devices under 0000:af:00.1: cvl_0_1 00:28:59.931 05:05:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:59.931 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.932 05:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:59.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:28:59.932 00:28:59.932 --- 10.0.0.2 ping statistics --- 00:28:59.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.932 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:59.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:28:59.932 00:28:59.932 --- 10.0.0.1 ping statistics --- 00:28:59.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.932 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=807872 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 807872 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 807872 ']' 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.932 [2024-12-10 05:05:50.227659] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:59.932 [2024-12-10 05:05:50.228566] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:28:59.932 [2024-12-10 05:05:50.228598] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.932 [2024-12-10 05:05:50.307077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:59.932 [2024-12-10 05:05:50.349522] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.932 [2024-12-10 05:05:50.349555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.932 [2024-12-10 05:05:50.349562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.932 [2024-12-10 05:05:50.349568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.932 [2024-12-10 05:05:50.349573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:59.932 [2024-12-10 05:05:50.351050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.932 [2024-12-10 05:05:50.354183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:59.932 [2024-12-10 05:05:50.354285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.932 [2024-12-10 05:05:50.354286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:59.932 [2024-12-10 05:05:50.422094] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:59.932 [2024-12-10 05:05:50.422471] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:59.932 [2024-12-10 05:05:50.422922] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:59.932 [2024-12-10 05:05:50.422994] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:59.932 [2024-12-10 05:05:50.423087] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.932 [2024-12-10 05:05:50.494973] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.932 05:05:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.932 Malloc0 00:28:59.932 [2024-12-10 05:05:50.579243] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=808070 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 808070 /var/tmp/bdevperf.sock 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 808070 ']' 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:59.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:59.932 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.933 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:59.933 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.933 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.933 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.933 { 00:28:59.933 "params": { 00:28:59.933 "name": "Nvme$subsystem", 00:28:59.933 "trtype": "$TEST_TRANSPORT", 00:28:59.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.933 "adrfam": "ipv4", 00:28:59.933 "trsvcid": "$NVMF_PORT", 00:28:59.933 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.933 "hdgst": ${hdgst:-false}, 00:28:59.933 "ddgst": ${ddgst:-false} 00:28:59.933 }, 00:28:59.933 "method": "bdev_nvme_attach_controller" 00:28:59.933 } 00:28:59.933 EOF 00:28:59.933 )") 00:28:59.933 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:59.933 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:59.933 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:59.933 05:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:59.933 "params": { 00:28:59.933 "name": "Nvme0", 00:28:59.933 "trtype": "tcp", 00:28:59.933 "traddr": "10.0.0.2", 00:28:59.933 "adrfam": "ipv4", 00:28:59.933 "trsvcid": "4420", 00:28:59.933 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:59.933 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:59.933 "hdgst": false, 00:28:59.933 "ddgst": false 00:28:59.933 }, 00:28:59.933 "method": "bdev_nvme_attach_controller" 00:28:59.933 }' 00:28:59.933 [2024-12-10 05:05:50.673563] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:28:59.933 [2024-12-10 05:05:50.673614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid808070 ] 00:28:59.933 [2024-12-10 05:05:50.749504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.933 [2024-12-10 05:05:50.789131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.192 Running I/O for 10 seconds... 
00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:00.192 05:05:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:29:00.192 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:29:00.453 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:29:00.453 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:00.453 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:00.453 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:00.453 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:00.453 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:00.453 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.453 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:29:00.453 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:29:00.453 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:00.453 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:00.453 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:00.453 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:00.453 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.453 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:00.453 [2024-12-10 05:05:51.474751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is 
same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474934] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.453 [2024-12-10 05:05:51.474964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be 
set 00:29:00.454 [2024-12-10 05:05:51.474969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.474975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.474981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.474987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.474993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.474999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 
05:05:51.475041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475063] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475110] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c830 is same with the state(6) to be set 00:29:00.454 [2024-12-10 05:05:51.475253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:00.454 [2024-12-10 05:05:51.475468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475557] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.454 [2024-12-10 05:05:51.475638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.454 [2024-12-10 05:05:51.475644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 
[2024-12-10 05:05:51.475804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.475991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.475999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.476006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.476013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.476021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.476029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.476035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.476043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.476049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.476057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.476063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.476071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.476078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.476086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.476093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.476101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.476108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.476115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.476122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 
[2024-12-10 05:05:51.476130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.476136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.455 [2024-12-10 05:05:51.476144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.455 [2024-12-10 05:05:51.476150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.456 [2024-12-10 05:05:51.476158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.456 [2024-12-10 05:05:51.476165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.456 [2024-12-10 05:05:51.476178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.456 [2024-12-10 05:05:51.476184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.456 [2024-12-10 05:05:51.476192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.456 [2024-12-10 05:05:51.476199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.456 [2024-12-10 05:05:51.476210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.456 [2024-12-10 05:05:51.476216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.456 [2024-12-10 05:05:51.476224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.456 [2024-12-10 05:05:51.476231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.456 [2024-12-10 05:05:51.476238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176d720 is same with the state(6) to be set 00:29:00.456 [2024-12-10 05:05:51.477207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:00.456 task offset: 98304 on job bdev=Nvme0n1 fails 00:29:00.456 00:29:00.456 Latency(us) 00:29:00.456 [2024-12-10T04:05:51.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.456 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:00.456 Job: Nvme0n1 ended in about 0.41 seconds with error 00:29:00.456 Verification LBA range: start 0x0 length 0x400 00:29:00.456 Nvme0n1 : 0.41 1889.44 118.09 157.45 0.00 30368.79 3760.52 28835.84 00:29:00.456 [2024-12-10T04:05:51.593Z] =================================================================================================================== 00:29:00.456 [2024-12-10T04:05:51.593Z] Total : 1889.44 118.09 157.45 0.00 30368.79 3760.52 28835.84 00:29:00.456 [2024-12-10 05:05:51.479611] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:00.456 [2024-12-10 05:05:51.479632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554760 (9): Bad file descriptor 00:29:00.456 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.456 05:05:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:00.456 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.456 [2024-12-10 05:05:51.480650] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:29:00.456 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:00.456 [2024-12-10 05:05:51.480725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:00.456 [2024-12-10 05:05:51.480750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.456 [2024-12-10 05:05:51.480762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:29:00.456 [2024-12-10 05:05:51.480769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:29:00.456 [2024-12-10 05:05:51.480775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.456 [2024-12-10 05:05:51.480782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1554760 00:29:00.456 [2024-12-10 05:05:51.480802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554760 (9): Bad file descriptor 00:29:00.456 [2024-12-10 05:05:51.480813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:00.456 [2024-12-10 05:05:51.480820] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:00.456 [2024-12-10 05:05:51.480828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:00.456 [2024-12-10 05:05:51.480836] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:00.456 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.456 05:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:01.390 05:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 808070 00:29:01.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (808070) - No such process 00:29:01.390 05:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:01.391 05:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:01.391 05:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:01.391 05:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:01.391 05:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:01.391 05:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:01.391 05:05:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.391 05:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.391 { 00:29:01.391 "params": { 00:29:01.391 "name": "Nvme$subsystem", 00:29:01.391 "trtype": "$TEST_TRANSPORT", 00:29:01.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.391 "adrfam": "ipv4", 00:29:01.391 "trsvcid": "$NVMF_PORT", 00:29:01.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.391 "hdgst": ${hdgst:-false}, 00:29:01.391 "ddgst": ${ddgst:-false} 00:29:01.391 }, 00:29:01.391 "method": "bdev_nvme_attach_controller" 00:29:01.391 } 00:29:01.391 EOF 00:29:01.391 )") 00:29:01.391 05:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:01.391 05:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:01.391 05:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:01.391 05:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:01.391 "params": { 00:29:01.391 "name": "Nvme0", 00:29:01.391 "trtype": "tcp", 00:29:01.391 "traddr": "10.0.0.2", 00:29:01.391 "adrfam": "ipv4", 00:29:01.391 "trsvcid": "4420", 00:29:01.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:01.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:01.391 "hdgst": false, 00:29:01.391 "ddgst": false 00:29:01.391 }, 00:29:01.391 "method": "bdev_nvme_attach_controller" 00:29:01.391 }' 00:29:01.649 [2024-12-10 05:05:52.544272] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:29:01.649 [2024-12-10 05:05:52.544322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid808367 ] 00:29:01.649 [2024-12-10 05:05:52.618039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.649 [2024-12-10 05:05:52.656580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.908 Running I/O for 1 seconds... 00:29:02.844 2017.00 IOPS, 126.06 MiB/s 00:29:02.844 Latency(us) 00:29:02.844 [2024-12-10T04:05:53.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.844 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.844 Verification LBA range: start 0x0 length 0x400 00:29:02.844 Nvme0n1 : 1.02 2049.28 128.08 0.00 0.00 30632.30 2793.08 27213.04 00:29:02.844 [2024-12-10T04:05:53.981Z] =================================================================================================================== 00:29:02.844 [2024-12-10T04:05:53.981Z] Total : 2049.28 128.08 0.00 0.00 30632.30 2793.08 27213.04 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:03.103 rmmod nvme_tcp 00:29:03.103 rmmod nvme_fabrics 00:29:03.103 rmmod nvme_keyring 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 807872 ']' 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 807872 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 807872 ']' 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 807872 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:03.103 05:05:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 807872 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 807872' 00:29:03.103 killing process with pid 807872 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 807872 00:29:03.103 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 807872 00:29:03.363 [2024-12-10 05:05:54.380668] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:03.363 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:03.363 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:03.363 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:03.363 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:03.363 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:03.363 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:03.363 05:05:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:03.363 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:03.363 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:03.363 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.363 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.363 05:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:05.900 00:29:05.900 real 0m12.373s 00:29:05.900 user 0m18.256s 00:29:05.900 sys 0m6.356s 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:05.900 ************************************ 00:29:05.900 END TEST nvmf_host_management 00:29:05.900 ************************************ 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:05.900 
05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:05.900 ************************************ 00:29:05.900 START TEST nvmf_lvol 00:29:05.900 ************************************ 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:05.900 * Looking for test storage... 00:29:05.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:05.900 05:05:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:05.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.900 --rc genhtml_branch_coverage=1 00:29:05.900 --rc 
genhtml_function_coverage=1 00:29:05.900 --rc genhtml_legend=1 00:29:05.900 --rc geninfo_all_blocks=1 00:29:05.900 --rc geninfo_unexecuted_blocks=1 00:29:05.900 00:29:05.900 ' 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:05.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.900 --rc genhtml_branch_coverage=1 00:29:05.900 --rc genhtml_function_coverage=1 00:29:05.900 --rc genhtml_legend=1 00:29:05.900 --rc geninfo_all_blocks=1 00:29:05.900 --rc geninfo_unexecuted_blocks=1 00:29:05.900 00:29:05.900 ' 00:29:05.900 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:05.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.900 --rc genhtml_branch_coverage=1 00:29:05.900 --rc genhtml_function_coverage=1 00:29:05.900 --rc genhtml_legend=1 00:29:05.900 --rc geninfo_all_blocks=1 00:29:05.901 --rc geninfo_unexecuted_blocks=1 00:29:05.901 00:29:05.901 ' 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:05.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.901 --rc genhtml_branch_coverage=1 00:29:05.901 --rc genhtml_function_coverage=1 00:29:05.901 --rc genhtml_legend=1 00:29:05.901 --rc geninfo_all_blocks=1 00:29:05.901 --rc geninfo_unexecuted_blocks=1 00:29:05.901 00:29:05.901 ' 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.901 05:05:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.901 05:05:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:05.901 05:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:12.471 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.471 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:12.471 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:12.471 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:12.471 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:12.471 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:12.471 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:29:12.471 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:12.471 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:12.471 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:12.471 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:12.472 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:12.472 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.472 05:06:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:12.472 Found net devices under 0000:af:00.0: cvl_0_0 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:12.472 Found net devices under 0000:af:00.1: cvl_0_1 00:29:12.472 05:06:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.472 05:06:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:12.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:12.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:29:12.472 00:29:12.472 --- 10.0.0.2 ping statistics --- 00:29:12.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.472 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:12.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:12.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:29:12.472 00:29:12.472 --- 10.0.0.1 ping statistics --- 00:29:12.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.472 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:12.472 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:12.473 
05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=812186 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 812186 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 812186 ']' 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:12.473 [2024-12-10 05:06:02.700093] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:29:12.473 [2024-12-10 05:06:02.700984] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:29:12.473 [2024-12-10 05:06:02.701017] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.473 [2024-12-10 05:06:02.777396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:12.473 [2024-12-10 05:06:02.817389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.473 [2024-12-10 05:06:02.817423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.473 [2024-12-10 05:06:02.817430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.473 [2024-12-10 05:06:02.817436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.473 [2024-12-10 05:06:02.817441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:12.473 [2024-12-10 05:06:02.818706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.473 [2024-12-10 05:06:02.818811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.473 [2024-12-10 05:06:02.818813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:12.473 [2024-12-10 05:06:02.886421] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:12.473 [2024-12-10 05:06:02.887158] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:12.473 [2024-12-10 05:06:02.887253] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:12.473 [2024-12-10 05:06:02.887387] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.473 05:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:12.473 [2024-12-10 05:06:03.123489] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.473 05:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:12.473 05:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:12.473 05:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:12.473 05:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:12.473 05:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:12.731 05:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:12.990 05:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e44fe6ee-0577-47b8-95c3-0b9c587b2b4d 00:29:12.990 05:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e44fe6ee-0577-47b8-95c3-0b9c587b2b4d lvol 20 00:29:13.250 05:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=390264d4-a7fe-4417-ae49-60e38b041d1a 00:29:13.250 05:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:13.250 05:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 390264d4-a7fe-4417-ae49-60e38b041d1a 00:29:13.509 05:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:13.768 [2024-12-10 05:06:04.727365] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.768 05:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:14.026 
05:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=812641 00:29:14.026 05:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:14.026 05:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:14.962 05:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 390264d4-a7fe-4417-ae49-60e38b041d1a MY_SNAPSHOT 00:29:15.221 05:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f90c3932-00e9-49e2-8174-0f66edc787c2 00:29:15.221 05:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 390264d4-a7fe-4417-ae49-60e38b041d1a 30 00:29:15.479 05:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f90c3932-00e9-49e2-8174-0f66edc787c2 MY_CLONE 00:29:15.738 05:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=959703ae-49c6-4fd2-99ce-12b626501d37 00:29:15.738 05:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 959703ae-49c6-4fd2-99ce-12b626501d37 00:29:16.306 05:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 812641 00:29:24.425 Initializing NVMe Controllers 00:29:24.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:24.425 
Controller IO queue size 128, less than required. 00:29:24.425 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:24.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:24.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:24.425 Initialization complete. Launching workers. 00:29:24.425 ======================================================== 00:29:24.425 Latency(us) 00:29:24.425 Device Information : IOPS MiB/s Average min max 00:29:24.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12030.10 46.99 10643.50 3323.42 52301.38 00:29:24.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12112.60 47.31 10572.95 3368.09 62470.53 00:29:24.425 ======================================================== 00:29:24.425 Total : 24142.70 94.31 10608.10 3323.42 62470.53 00:29:24.425 00:29:24.425 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:24.425 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 390264d4-a7fe-4417-ae49-60e38b041d1a 00:29:24.698 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e44fe6ee-0577-47b8-95c3-0b9c587b2b4d 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:24.964 rmmod nvme_tcp 00:29:24.964 rmmod nvme_fabrics 00:29:24.964 rmmod nvme_keyring 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 812186 ']' 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 812186 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 812186 ']' 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 812186 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.964 05:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 812186 00:29:24.964 05:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:24.964 05:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:24.964 05:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 812186' 00:29:24.964 killing process with pid 812186 00:29:24.964 05:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 812186 00:29:24.964 05:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 812186 00:29:25.223 05:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:25.223 05:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:25.223 05:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:25.223 05:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:25.223 05:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:25.223 05:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:25.223 05:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:25.223 05:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:25.223 05:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:25.223 05:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.223 05:06:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.223 05:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:27.760 00:29:27.760 real 0m21.741s 00:29:27.760 user 0m55.389s 00:29:27.760 sys 0m9.809s 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:27.760 ************************************ 00:29:27.760 END TEST nvmf_lvol 00:29:27.760 ************************************ 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:27.760 ************************************ 00:29:27.760 START TEST nvmf_lvs_grow 00:29:27.760 ************************************ 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:27.760 * Looking for test storage... 
00:29:27.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:27.760 05:06:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:27.760 05:06:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:27.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.760 --rc genhtml_branch_coverage=1 00:29:27.760 --rc genhtml_function_coverage=1 00:29:27.760 --rc genhtml_legend=1 00:29:27.760 --rc geninfo_all_blocks=1 00:29:27.760 --rc geninfo_unexecuted_blocks=1 00:29:27.760 00:29:27.760 ' 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:27.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.760 --rc genhtml_branch_coverage=1 00:29:27.760 --rc genhtml_function_coverage=1 00:29:27.760 --rc genhtml_legend=1 00:29:27.760 --rc geninfo_all_blocks=1 00:29:27.760 --rc geninfo_unexecuted_blocks=1 00:29:27.760 00:29:27.760 ' 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:27.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.760 --rc genhtml_branch_coverage=1 00:29:27.760 --rc genhtml_function_coverage=1 00:29:27.760 --rc genhtml_legend=1 00:29:27.760 --rc geninfo_all_blocks=1 00:29:27.760 --rc geninfo_unexecuted_blocks=1 00:29:27.760 00:29:27.760 ' 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:27.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.760 --rc genhtml_branch_coverage=1 00:29:27.760 --rc genhtml_function_coverage=1 00:29:27.760 --rc genhtml_legend=1 00:29:27.760 --rc geninfo_all_blocks=1 00:29:27.760 --rc 
geninfo_unexecuted_blocks=1 00:29:27.760 00:29:27.760 ' 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:27.760 05:06:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.760 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.760 05:06:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:27.761 05:06:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:27.761 05:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:33.038 
05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.038 05:06:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:33.038 05:06:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:33.038 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:33.038 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.038 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:33.039 Found net devices under 0000:af:00.0: cvl_0_0 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.039 05:06:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:33.039 Found net devices under 0000:af:00.1: cvl_0_1 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:33.039 
05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:33.039 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:33.298 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.298 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.298 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:33.298 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.298 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:33.298 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.299 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.299 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.299 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:33.299 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:33.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:29:33.299 00:29:33.299 --- 10.0.0.2 ping statistics --- 00:29:33.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.299 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:29:33.299 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:33.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:29:33.299 00:29:33.299 --- 10.0.0.1 ping statistics --- 00:29:33.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.299 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:29:33.299 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.299 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:33.299 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:33.299 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.299 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:33.299 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:33.299 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.299 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:33.299 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:33.558 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:33.558 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:33.558 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:33.558 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:33.558 05:06:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=818113 00:29:33.558 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:33.558 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 818113 00:29:33.558 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 818113 ']' 00:29:33.558 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.558 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.558 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.558 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.558 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:33.558 [2024-12-10 05:06:24.514788] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:33.558 [2024-12-10 05:06:24.515688] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:29:33.558 [2024-12-10 05:06:24.515722] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.558 [2024-12-10 05:06:24.593245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.558 [2024-12-10 05:06:24.633646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.558 [2024-12-10 05:06:24.633680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.558 [2024-12-10 05:06:24.633687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.558 [2024-12-10 05:06:24.633694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.558 [2024-12-10 05:06:24.633699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:33.558 [2024-12-10 05:06:24.634182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.818 [2024-12-10 05:06:24.702174] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:33.818 [2024-12-10 05:06:24.702388] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:33.818 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.818 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:33.818 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:33.818 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:33.818 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:33.818 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.818 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:33.818 [2024-12-10 05:06:24.942818] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.077 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:34.077 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:34.077 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:34.077 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:34.077 ************************************ 00:29:34.077 START TEST lvs_grow_clean 00:29:34.077 ************************************ 00:29:34.077 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:29:34.077 05:06:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:34.077 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:34.077 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:34.077 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:34.077 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:34.077 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:34.077 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:34.077 05:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:34.077 05:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:34.336 05:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:34.336 05:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:34.336 05:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9673a8cb-01c8-4f18-a0b7-48875e4b595d 00:29:34.336 05:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9673a8cb-01c8-4f18-a0b7-48875e4b595d 00:29:34.336 05:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:34.595 05:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:34.595 05:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:34.595 05:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9673a8cb-01c8-4f18-a0b7-48875e4b595d lvol 150 00:29:34.854 05:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=58d62a4d-be7b-4727-a90f-4588061871b4 00:29:34.854 05:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:34.854 05:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:34.854 [2024-12-10 05:06:25.966576] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:34.854 [2024-12-10 05:06:25.966707] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:34.854 true 00:29:34.854 05:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9673a8cb-01c8-4f18-a0b7-48875e4b595d 00:29:34.854 05:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:35.113 05:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:35.113 05:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:35.373 05:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 58d62a4d-be7b-4727-a90f-4588061871b4 00:29:35.631 05:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:35.631 [2024-12-10 05:06:26.695001] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.631 05:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:35.890 05:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=818575 00:29:35.890 05:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:35.890 05:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:35.890 05:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 818575 /var/tmp/bdevperf.sock 00:29:35.890 05:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 818575 ']' 00:29:35.890 05:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:35.890 05:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.890 05:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:35.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:35.890 05:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.890 05:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:35.890 [2024-12-10 05:06:26.948952] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:29:35.890 [2024-12-10 05:06:26.948998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818575 ] 00:29:35.890 [2024-12-10 05:06:27.022109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.149 [2024-12-10 05:06:27.063625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.149 05:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.149 05:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:36.149 05:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:36.408 Nvme0n1 00:29:36.408 05:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:36.667 [ 00:29:36.667 { 00:29:36.667 "name": "Nvme0n1", 00:29:36.667 "aliases": [ 00:29:36.667 "58d62a4d-be7b-4727-a90f-4588061871b4" 00:29:36.667 ], 00:29:36.667 "product_name": "NVMe disk", 00:29:36.667 
"block_size": 4096, 00:29:36.667 "num_blocks": 38912, 00:29:36.667 "uuid": "58d62a4d-be7b-4727-a90f-4588061871b4", 00:29:36.667 "numa_id": 1, 00:29:36.667 "assigned_rate_limits": { 00:29:36.667 "rw_ios_per_sec": 0, 00:29:36.667 "rw_mbytes_per_sec": 0, 00:29:36.667 "r_mbytes_per_sec": 0, 00:29:36.667 "w_mbytes_per_sec": 0 00:29:36.667 }, 00:29:36.667 "claimed": false, 00:29:36.667 "zoned": false, 00:29:36.667 "supported_io_types": { 00:29:36.667 "read": true, 00:29:36.667 "write": true, 00:29:36.667 "unmap": true, 00:29:36.667 "flush": true, 00:29:36.667 "reset": true, 00:29:36.667 "nvme_admin": true, 00:29:36.667 "nvme_io": true, 00:29:36.667 "nvme_io_md": false, 00:29:36.667 "write_zeroes": true, 00:29:36.667 "zcopy": false, 00:29:36.667 "get_zone_info": false, 00:29:36.667 "zone_management": false, 00:29:36.667 "zone_append": false, 00:29:36.667 "compare": true, 00:29:36.667 "compare_and_write": true, 00:29:36.667 "abort": true, 00:29:36.667 "seek_hole": false, 00:29:36.667 "seek_data": false, 00:29:36.667 "copy": true, 00:29:36.667 "nvme_iov_md": false 00:29:36.667 }, 00:29:36.667 "memory_domains": [ 00:29:36.667 { 00:29:36.667 "dma_device_id": "system", 00:29:36.667 "dma_device_type": 1 00:29:36.667 } 00:29:36.667 ], 00:29:36.667 "driver_specific": { 00:29:36.667 "nvme": [ 00:29:36.667 { 00:29:36.667 "trid": { 00:29:36.667 "trtype": "TCP", 00:29:36.667 "adrfam": "IPv4", 00:29:36.667 "traddr": "10.0.0.2", 00:29:36.667 "trsvcid": "4420", 00:29:36.667 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:36.667 }, 00:29:36.667 "ctrlr_data": { 00:29:36.667 "cntlid": 1, 00:29:36.667 "vendor_id": "0x8086", 00:29:36.667 "model_number": "SPDK bdev Controller", 00:29:36.667 "serial_number": "SPDK0", 00:29:36.667 "firmware_revision": "25.01", 00:29:36.667 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:36.667 "oacs": { 00:29:36.667 "security": 0, 00:29:36.667 "format": 0, 00:29:36.667 "firmware": 0, 00:29:36.667 "ns_manage": 0 00:29:36.667 }, 00:29:36.667 "multi_ctrlr": true, 
00:29:36.667 "ana_reporting": false 00:29:36.667 }, 00:29:36.667 "vs": { 00:29:36.667 "nvme_version": "1.3" 00:29:36.667 }, 00:29:36.667 "ns_data": { 00:29:36.667 "id": 1, 00:29:36.667 "can_share": true 00:29:36.667 } 00:29:36.667 } 00:29:36.667 ], 00:29:36.667 "mp_policy": "active_passive" 00:29:36.667 } 00:29:36.667 } 00:29:36.667 ] 00:29:36.667 05:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=818763 00:29:36.667 05:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:36.667 05:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:36.667 Running I/O for 10 seconds... 00:29:37.603 Latency(us) 00:29:37.603 [2024-12-10T04:06:28.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.603 Nvme0n1 : 1.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:29:37.603 [2024-12-10T04:06:28.740Z] =================================================================================================================== 00:29:37.603 [2024-12-10T04:06:28.740Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:29:37.603 00:29:38.541 05:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9673a8cb-01c8-4f18-a0b7-48875e4b595d 00:29:38.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:38.800 Nvme0n1 : 2.00 23495.00 91.78 0.00 0.00 0.00 0.00 0.00 00:29:38.800 [2024-12-10T04:06:29.937Z] 
=================================================================================================================== 00:29:38.800 [2024-12-10T04:06:29.937Z] Total : 23495.00 91.78 0.00 0.00 0.00 0.00 0.00 00:29:38.800 00:29:38.800 true 00:29:38.800 05:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9673a8cb-01c8-4f18-a0b7-48875e4b595d 00:29:38.800 05:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:39.058 05:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:39.059 05:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:39.059 05:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 818763 00:29:39.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:39.627 Nvme0n1 : 3.00 23389.33 91.36 0.00 0.00 0.00 0.00 0.00 00:29:39.627 [2024-12-10T04:06:30.764Z] =================================================================================================================== 00:29:39.627 [2024-12-10T04:06:30.764Z] Total : 23389.33 91.36 0.00 0.00 0.00 0.00 0.00 00:29:39.627 00:29:40.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:40.563 Nvme0n1 : 4.00 23491.50 91.76 0.00 0.00 0.00 0.00 0.00 00:29:40.563 [2024-12-10T04:06:31.700Z] =================================================================================================================== 00:29:40.563 [2024-12-10T04:06:31.700Z] Total : 23491.50 91.76 0.00 0.00 0.00 0.00 0.00 00:29:40.563 00:29:41.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:29:41.942 Nvme0n1 : 5.00 23587.80 92.14 0.00 0.00 0.00 0.00 0.00 00:29:41.942 [2024-12-10T04:06:33.079Z] =================================================================================================================== 00:29:41.942 [2024-12-10T04:06:33.079Z] Total : 23587.80 92.14 0.00 0.00 0.00 0.00 0.00 00:29:41.942 00:29:42.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:42.879 Nvme0n1 : 6.00 23657.00 92.41 0.00 0.00 0.00 0.00 0.00 00:29:42.879 [2024-12-10T04:06:34.016Z] =================================================================================================================== 00:29:42.879 [2024-12-10T04:06:34.016Z] Total : 23657.00 92.41 0.00 0.00 0.00 0.00 0.00 00:29:42.879 00:29:43.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:43.816 Nvme0n1 : 7.00 23706.43 92.60 0.00 0.00 0.00 0.00 0.00 00:29:43.816 [2024-12-10T04:06:34.953Z] =================================================================================================================== 00:29:43.816 [2024-12-10T04:06:34.953Z] Total : 23706.43 92.60 0.00 0.00 0.00 0.00 0.00 00:29:43.816 00:29:44.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:44.761 Nvme0n1 : 8.00 23759.38 92.81 0.00 0.00 0.00 0.00 0.00 00:29:44.761 [2024-12-10T04:06:35.898Z] =================================================================================================================== 00:29:44.761 [2024-12-10T04:06:35.898Z] Total : 23759.38 92.81 0.00 0.00 0.00 0.00 0.00 00:29:44.761 00:29:45.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:45.698 Nvme0n1 : 9.00 23786.44 92.92 0.00 0.00 0.00 0.00 0.00 00:29:45.698 [2024-12-10T04:06:36.835Z] =================================================================================================================== 00:29:45.698 [2024-12-10T04:06:36.835Z] Total : 23786.44 92.92 0.00 0.00 0.00 0.00 0.00 00:29:45.698 
00:29:46.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:46.635 Nvme0n1 : 10.00 23814.50 93.03 0.00 0.00 0.00 0.00 0.00 00:29:46.635 [2024-12-10T04:06:37.772Z] =================================================================================================================== 00:29:46.635 [2024-12-10T04:06:37.772Z] Total : 23814.50 93.03 0.00 0.00 0.00 0.00 0.00 00:29:46.635 00:29:46.635 00:29:46.635 Latency(us) 00:29:46.635 [2024-12-10T04:06:37.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:46.635 Nvme0n1 : 10.00 23815.24 93.03 0.00 0.00 5371.53 3214.38 25340.59 00:29:46.635 [2024-12-10T04:06:37.772Z] =================================================================================================================== 00:29:46.635 [2024-12-10T04:06:37.772Z] Total : 23815.24 93.03 0.00 0.00 5371.53 3214.38 25340.59 00:29:46.635 { 00:29:46.635 "results": [ 00:29:46.635 { 00:29:46.635 "job": "Nvme0n1", 00:29:46.635 "core_mask": "0x2", 00:29:46.635 "workload": "randwrite", 00:29:46.635 "status": "finished", 00:29:46.635 "queue_depth": 128, 00:29:46.635 "io_size": 4096, 00:29:46.635 "runtime": 10.002378, 00:29:46.635 "iops": 23815.236736704013, 00:29:46.635 "mibps": 93.02826850275005, 00:29:46.635 "io_failed": 0, 00:29:46.635 "io_timeout": 0, 00:29:46.635 "avg_latency_us": 5371.531594716044, 00:29:46.635 "min_latency_us": 3214.384761904762, 00:29:46.635 "max_latency_us": 25340.586666666666 00:29:46.635 } 00:29:46.635 ], 00:29:46.635 "core_count": 1 00:29:46.635 } 00:29:46.635 05:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 818575 00:29:46.635 05:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 818575 ']' 00:29:46.635 05:06:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 818575 00:29:46.636 05:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:46.636 05:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:46.636 05:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 818575 00:29:46.895 05:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:46.895 05:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:46.895 05:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 818575' 00:29:46.895 killing process with pid 818575 00:29:46.895 05:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 818575 00:29:46.895 Received shutdown signal, test time was about 10.000000 seconds 00:29:46.895 00:29:46.895 Latency(us) 00:29:46.895 [2024-12-10T04:06:38.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.895 [2024-12-10T04:06:38.032Z] =================================================================================================================== 00:29:46.895 [2024-12-10T04:06:38.032Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:46.895 05:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 818575 00:29:46.895 05:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:47.153 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:47.412 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9673a8cb-01c8-4f18-a0b7-48875e4b595d 00:29:47.412 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:47.412 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:47.412 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:47.412 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:47.671 [2024-12-10 05:06:38.678624] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:47.671 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9673a8cb-01c8-4f18-a0b7-48875e4b595d 00:29:47.671 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:47.671 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9673a8cb-01c8-4f18-a0b7-48875e4b595d 00:29:47.671 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:47.671 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.671 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:47.671 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.671 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:47.671 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.671 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:47.671 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:47.671 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9673a8cb-01c8-4f18-a0b7-48875e4b595d 00:29:47.930 request: 00:29:47.931 { 00:29:47.931 "uuid": "9673a8cb-01c8-4f18-a0b7-48875e4b595d", 00:29:47.931 "method": 
"bdev_lvol_get_lvstores", 00:29:47.931 "req_id": 1 00:29:47.931 } 00:29:47.931 Got JSON-RPC error response 00:29:47.931 response: 00:29:47.931 { 00:29:47.931 "code": -19, 00:29:47.931 "message": "No such device" 00:29:47.931 } 00:29:47.931 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:47.931 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:47.931 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:47.931 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:47.931 05:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:48.190 aio_bdev 00:29:48.190 05:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 58d62a4d-be7b-4727-a90f-4588061871b4 00:29:48.190 05:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=58d62a4d-be7b-4727-a90f-4588061871b4 00:29:48.190 05:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:48.190 05:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:48.190 05:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:48.190 05:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:48.190 05:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:48.190 05:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 58d62a4d-be7b-4727-a90f-4588061871b4 -t 2000 00:29:48.449 [ 00:29:48.449 { 00:29:48.449 "name": "58d62a4d-be7b-4727-a90f-4588061871b4", 00:29:48.449 "aliases": [ 00:29:48.449 "lvs/lvol" 00:29:48.449 ], 00:29:48.449 "product_name": "Logical Volume", 00:29:48.449 "block_size": 4096, 00:29:48.449 "num_blocks": 38912, 00:29:48.449 "uuid": "58d62a4d-be7b-4727-a90f-4588061871b4", 00:29:48.449 "assigned_rate_limits": { 00:29:48.449 "rw_ios_per_sec": 0, 00:29:48.449 "rw_mbytes_per_sec": 0, 00:29:48.449 "r_mbytes_per_sec": 0, 00:29:48.449 "w_mbytes_per_sec": 0 00:29:48.449 }, 00:29:48.449 "claimed": false, 00:29:48.449 "zoned": false, 00:29:48.449 "supported_io_types": { 00:29:48.449 "read": true, 00:29:48.449 "write": true, 00:29:48.449 "unmap": true, 00:29:48.449 "flush": false, 00:29:48.449 "reset": true, 00:29:48.449 "nvme_admin": false, 00:29:48.449 "nvme_io": false, 00:29:48.449 "nvme_io_md": false, 00:29:48.449 "write_zeroes": true, 00:29:48.449 "zcopy": false, 00:29:48.449 "get_zone_info": false, 00:29:48.449 "zone_management": false, 00:29:48.449 "zone_append": false, 00:29:48.449 "compare": false, 00:29:48.449 "compare_and_write": false, 00:29:48.449 "abort": false, 00:29:48.449 "seek_hole": true, 00:29:48.449 "seek_data": true, 00:29:48.449 "copy": false, 00:29:48.449 "nvme_iov_md": false 00:29:48.449 }, 00:29:48.449 "driver_specific": { 00:29:48.449 "lvol": { 00:29:48.449 "lvol_store_uuid": "9673a8cb-01c8-4f18-a0b7-48875e4b595d", 00:29:48.449 "base_bdev": "aio_bdev", 00:29:48.449 
"thin_provision": false, 00:29:48.449 "num_allocated_clusters": 38, 00:29:48.449 "snapshot": false, 00:29:48.449 "clone": false, 00:29:48.449 "esnap_clone": false 00:29:48.449 } 00:29:48.449 } 00:29:48.449 } 00:29:48.449 ] 00:29:48.449 05:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:48.449 05:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9673a8cb-01c8-4f18-a0b7-48875e4b595d 00:29:48.449 05:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:48.709 05:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:48.709 05:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9673a8cb-01c8-4f18-a0b7-48875e4b595d 00:29:48.709 05:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:48.967 05:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:48.967 05:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 58d62a4d-be7b-4727-a90f-4588061871b4 00:29:48.967 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9673a8cb-01c8-4f18-a0b7-48875e4b595d 
00:29:49.233 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:49.568 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:49.568 00:29:49.568 real 0m15.519s 00:29:49.568 user 0m15.053s 00:29:49.569 sys 0m1.479s 00:29:49.569 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:49.569 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:49.569 ************************************ 00:29:49.569 END TEST lvs_grow_clean 00:29:49.569 ************************************ 00:29:49.569 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:49.569 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:49.569 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:49.569 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:49.569 ************************************ 00:29:49.569 START TEST lvs_grow_dirty 00:29:49.569 ************************************ 00:29:49.569 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:49.569 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:49.569 05:06:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:49.569 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:49.569 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:49.569 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:49.569 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:49.569 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:49.569 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:49.569 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:49.858 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:49.858 05:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:50.126 05:06:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=db1bcb17-cf58-410c-bad1-8c72d3000207 00:29:50.126 05:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db1bcb17-cf58-410c-bad1-8c72d3000207 00:29:50.126 05:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:50.126 05:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:50.126 05:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:50.126 05:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u db1bcb17-cf58-410c-bad1-8c72d3000207 lvol 150 00:29:50.387 05:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a9ce379c-7857-47b5-8583-1ab66eb810a7 00:29:50.387 05:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:50.387 05:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:50.646 [2024-12-10 05:06:41.570579] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:50.646 [2024-12-10 
05:06:41.570710] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:50.646 true 00:29:50.646 05:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db1bcb17-cf58-410c-bad1-8c72d3000207 00:29:50.646 05:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:50.905 05:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:50.905 05:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:50.905 05:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a9ce379c-7857-47b5-8583-1ab66eb810a7 00:29:51.164 05:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:51.423 [2024-12-10 05:06:42.314997] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.423 05:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:51.423 05:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=821095 00:29:51.423 05:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:51.423 05:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:51.423 05:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 821095 /var/tmp/bdevperf.sock 00:29:51.423 05:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 821095 ']' 00:29:51.423 05:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:51.423 05:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.423 05:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:51.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:51.423 05:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.423 05:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:51.682 [2024-12-10 05:06:42.556863] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:29:51.682 [2024-12-10 05:06:42.556912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid821095 ] 00:29:51.682 [2024-12-10 05:06:42.630756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.682 [2024-12-10 05:06:42.671051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.682 05:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.682 05:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:51.682 05:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:51.941 Nvme0n1 00:29:51.941 05:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:52.200 [ 00:29:52.200 { 00:29:52.200 "name": "Nvme0n1", 00:29:52.200 "aliases": [ 00:29:52.200 "a9ce379c-7857-47b5-8583-1ab66eb810a7" 00:29:52.200 ], 00:29:52.200 "product_name": "NVMe disk", 00:29:52.200 "block_size": 4096, 00:29:52.200 "num_blocks": 38912, 00:29:52.200 "uuid": "a9ce379c-7857-47b5-8583-1ab66eb810a7", 00:29:52.200 "numa_id": 1, 00:29:52.200 "assigned_rate_limits": { 00:29:52.200 "rw_ios_per_sec": 0, 00:29:52.200 "rw_mbytes_per_sec": 0, 00:29:52.200 "r_mbytes_per_sec": 0, 00:29:52.200 "w_mbytes_per_sec": 0 00:29:52.200 }, 00:29:52.200 "claimed": false, 00:29:52.200 "zoned": false, 
00:29:52.200 "supported_io_types": { 00:29:52.200 "read": true, 00:29:52.200 "write": true, 00:29:52.200 "unmap": true, 00:29:52.200 "flush": true, 00:29:52.200 "reset": true, 00:29:52.200 "nvme_admin": true, 00:29:52.200 "nvme_io": true, 00:29:52.200 "nvme_io_md": false, 00:29:52.200 "write_zeroes": true, 00:29:52.200 "zcopy": false, 00:29:52.200 "get_zone_info": false, 00:29:52.200 "zone_management": false, 00:29:52.200 "zone_append": false, 00:29:52.200 "compare": true, 00:29:52.200 "compare_and_write": true, 00:29:52.200 "abort": true, 00:29:52.200 "seek_hole": false, 00:29:52.200 "seek_data": false, 00:29:52.200 "copy": true, 00:29:52.200 "nvme_iov_md": false 00:29:52.200 }, 00:29:52.200 "memory_domains": [ 00:29:52.200 { 00:29:52.200 "dma_device_id": "system", 00:29:52.200 "dma_device_type": 1 00:29:52.200 } 00:29:52.200 ], 00:29:52.200 "driver_specific": { 00:29:52.200 "nvme": [ 00:29:52.200 { 00:29:52.200 "trid": { 00:29:52.200 "trtype": "TCP", 00:29:52.200 "adrfam": "IPv4", 00:29:52.200 "traddr": "10.0.0.2", 00:29:52.200 "trsvcid": "4420", 00:29:52.200 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:52.200 }, 00:29:52.200 "ctrlr_data": { 00:29:52.200 "cntlid": 1, 00:29:52.200 "vendor_id": "0x8086", 00:29:52.200 "model_number": "SPDK bdev Controller", 00:29:52.200 "serial_number": "SPDK0", 00:29:52.200 "firmware_revision": "25.01", 00:29:52.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:52.200 "oacs": { 00:29:52.200 "security": 0, 00:29:52.200 "format": 0, 00:29:52.200 "firmware": 0, 00:29:52.200 "ns_manage": 0 00:29:52.200 }, 00:29:52.200 "multi_ctrlr": true, 00:29:52.200 "ana_reporting": false 00:29:52.200 }, 00:29:52.200 "vs": { 00:29:52.200 "nvme_version": "1.3" 00:29:52.200 }, 00:29:52.200 "ns_data": { 00:29:52.200 "id": 1, 00:29:52.200 "can_share": true 00:29:52.200 } 00:29:52.200 } 00:29:52.200 ], 00:29:52.200 "mp_policy": "active_passive" 00:29:52.200 } 00:29:52.200 } 00:29:52.200 ] 00:29:52.200 05:06:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=821299 00:29:52.200 05:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:52.200 05:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:52.200 Running I/O for 10 seconds... 00:29:53.579 Latency(us) 00:29:53.579 [2024-12-10T04:06:44.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.579 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:53.579 Nvme0n1 : 1.00 23178.00 90.54 0.00 0.00 0.00 0.00 0.00 00:29:53.579 [2024-12-10T04:06:44.716Z] =================================================================================================================== 00:29:53.579 [2024-12-10T04:06:44.716Z] Total : 23178.00 90.54 0.00 0.00 0.00 0.00 0.00 00:29:53.579 00:29:54.147 05:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u db1bcb17-cf58-410c-bad1-8c72d3000207 00:29:54.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:54.406 Nvme0n1 : 2.00 23385.00 91.35 0.00 0.00 0.00 0.00 0.00 00:29:54.406 [2024-12-10T04:06:45.543Z] =================================================================================================================== 00:29:54.406 [2024-12-10T04:06:45.543Z] Total : 23385.00 91.35 0.00 0.00 0.00 0.00 0.00 00:29:54.406 00:29:54.406 true 00:29:54.406 05:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u db1bcb17-cf58-410c-bad1-8c72d3000207 00:29:54.406 05:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:54.664 05:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:54.664 05:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:54.664 05:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 821299 00:29:55.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:55.232 Nvme0n1 : 3.00 23506.33 91.82 0.00 0.00 0.00 0.00 0.00 00:29:55.232 [2024-12-10T04:06:46.369Z] =================================================================================================================== 00:29:55.232 [2024-12-10T04:06:46.369Z] Total : 23506.33 91.82 0.00 0.00 0.00 0.00 0.00 00:29:55.232 00:29:56.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:56.168 Nvme0n1 : 4.00 23598.75 92.18 0.00 0.00 0.00 0.00 0.00 00:29:56.168 [2024-12-10T04:06:47.305Z] =================================================================================================================== 00:29:56.168 [2024-12-10T04:06:47.305Z] Total : 23598.75 92.18 0.00 0.00 0.00 0.00 0.00 00:29:56.168 00:29:57.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:57.545 Nvme0n1 : 5.00 23679.60 92.50 0.00 0.00 0.00 0.00 0.00 00:29:57.545 [2024-12-10T04:06:48.683Z] =================================================================================================================== 00:29:57.546 [2024-12-10T04:06:48.683Z] Total : 23679.60 92.50 0.00 0.00 0.00 0.00 0.00 00:29:57.546 00:29:58.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:29:58.482 Nvme0n1 : 6.00 23733.50 92.71 0.00 0.00 0.00 0.00 0.00 00:29:58.482 [2024-12-10T04:06:49.619Z] =================================================================================================================== 00:29:58.482 [2024-12-10T04:06:49.619Z] Total : 23733.50 92.71 0.00 0.00 0.00 0.00 0.00 00:29:58.482 00:29:59.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:59.418 Nvme0n1 : 7.00 23735.71 92.72 0.00 0.00 0.00 0.00 0.00 00:29:59.418 [2024-12-10T04:06:50.555Z] =================================================================================================================== 00:29:59.418 [2024-12-10T04:06:50.555Z] Total : 23735.71 92.72 0.00 0.00 0.00 0.00 0.00 00:29:59.418 00:30:00.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:00.355 Nvme0n1 : 8.00 23757.50 92.80 0.00 0.00 0.00 0.00 0.00 00:30:00.355 [2024-12-10T04:06:51.492Z] =================================================================================================================== 00:30:00.355 [2024-12-10T04:06:51.492Z] Total : 23757.50 92.80 0.00 0.00 0.00 0.00 0.00 00:30:00.355 00:30:01.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:01.291 Nvme0n1 : 9.00 23798.89 92.96 0.00 0.00 0.00 0.00 0.00 00:30:01.291 [2024-12-10T04:06:52.428Z] =================================================================================================================== 00:30:01.291 [2024-12-10T04:06:52.428Z] Total : 23798.89 92.96 0.00 0.00 0.00 0.00 0.00 00:30:01.291 00:30:02.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:02.229 Nvme0n1 : 10.00 23819.30 93.04 0.00 0.00 0.00 0.00 0.00 00:30:02.229 [2024-12-10T04:06:53.366Z] =================================================================================================================== 00:30:02.229 [2024-12-10T04:06:53.366Z] Total : 23819.30 93.04 0.00 0.00 0.00 0.00 0.00 00:30:02.229 00:30:02.229 
00:30:02.229 Latency(us) 00:30:02.229 [2024-12-10T04:06:53.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:02.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:02.229 Nvme0n1 : 10.00 23821.03 93.05 0.00 0.00 5370.40 3245.59 26713.72 00:30:02.229 [2024-12-10T04:06:53.366Z] =================================================================================================================== 00:30:02.229 [2024-12-10T04:06:53.366Z] Total : 23821.03 93.05 0.00 0.00 5370.40 3245.59 26713.72 00:30:02.229 { 00:30:02.229 "results": [ 00:30:02.229 { 00:30:02.229 "job": "Nvme0n1", 00:30:02.229 "core_mask": "0x2", 00:30:02.229 "workload": "randwrite", 00:30:02.229 "status": "finished", 00:30:02.229 "queue_depth": 128, 00:30:02.229 "io_size": 4096, 00:30:02.229 "runtime": 10.004647, 00:30:02.229 "iops": 23821.030367188367, 00:30:02.229 "mibps": 93.05089987182956, 00:30:02.229 "io_failed": 0, 00:30:02.229 "io_timeout": 0, 00:30:02.229 "avg_latency_us": 5370.399489987594, 00:30:02.229 "min_latency_us": 3245.592380952381, 00:30:02.229 "max_latency_us": 26713.721904761904 00:30:02.229 } 00:30:02.229 ], 00:30:02.229 "core_count": 1 00:30:02.229 } 00:30:02.229 05:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 821095 00:30:02.229 05:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 821095 ']' 00:30:02.229 05:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 821095 00:30:02.229 05:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:02.229 05:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:02.229 05:06:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 821095 00:30:02.488 05:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:02.488 05:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:02.488 05:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 821095' 00:30:02.488 killing process with pid 821095 00:30:02.488 05:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 821095 00:30:02.488 Received shutdown signal, test time was about 10.000000 seconds 00:30:02.488 00:30:02.488 Latency(us) 00:30:02.488 [2024-12-10T04:06:53.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:02.488 [2024-12-10T04:06:53.625Z] =================================================================================================================== 00:30:02.488 [2024-12-10T04:06:53.625Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:02.488 05:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 821095 00:30:02.488 05:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:02.747 05:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:03.006 05:06:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db1bcb17-cf58-410c-bad1-8c72d3000207 00:30:03.006 05:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:03.006 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:03.006 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:03.006 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 818113 00:30:03.006 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 818113 00:30:03.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 818113 Killed "${NVMF_APP[@]}" "$@" 00:30:03.265 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:03.265 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:03.265 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:03.265 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:03.265 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:03.265 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=822904 00:30:03.265 05:06:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 822904 00:30:03.265 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:03.265 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 822904 ']' 00:30:03.265 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.265 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:03.265 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.265 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:03.265 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:03.265 [2024-12-10 05:06:54.226624] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:03.265 [2024-12-10 05:06:54.227526] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:30:03.265 [2024-12-10 05:06:54.227565] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:03.265 [2024-12-10 05:06:54.306492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.265 [2024-12-10 05:06:54.345806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:03.265 [2024-12-10 05:06:54.345841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:03.265 [2024-12-10 05:06:54.345850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:03.265 [2024-12-10 05:06:54.345857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:03.265 [2024-12-10 05:06:54.345863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:03.265 [2024-12-10 05:06:54.346360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.524 [2024-12-10 05:06:54.414487] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:03.524 [2024-12-10 05:06:54.414705] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:03.525 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:03.525 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:03.525 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:03.525 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:03.525 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:03.525 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:03.525 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:03.784 [2024-12-10 05:06:54.659743] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:03.784 [2024-12-10 05:06:54.659951] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:03.784 [2024-12-10 05:06:54.660036] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:03.784 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:03.784 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a9ce379c-7857-47b5-8583-1ab66eb810a7 00:30:03.784 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=a9ce379c-7857-47b5-8583-1ab66eb810a7 00:30:03.784 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:03.784 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:03.784 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:03.784 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:03.784 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:03.784 05:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a9ce379c-7857-47b5-8583-1ab66eb810a7 -t 2000 00:30:04.043 [ 00:30:04.043 { 00:30:04.043 "name": "a9ce379c-7857-47b5-8583-1ab66eb810a7", 00:30:04.043 "aliases": [ 00:30:04.043 "lvs/lvol" 00:30:04.043 ], 00:30:04.043 "product_name": "Logical Volume", 00:30:04.043 "block_size": 4096, 00:30:04.043 "num_blocks": 38912, 00:30:04.043 "uuid": "a9ce379c-7857-47b5-8583-1ab66eb810a7", 00:30:04.043 "assigned_rate_limits": { 00:30:04.043 "rw_ios_per_sec": 0, 00:30:04.043 "rw_mbytes_per_sec": 0, 00:30:04.043 "r_mbytes_per_sec": 0, 00:30:04.043 "w_mbytes_per_sec": 0 00:30:04.043 }, 00:30:04.043 "claimed": false, 00:30:04.043 "zoned": false, 00:30:04.043 "supported_io_types": { 00:30:04.043 "read": true, 00:30:04.043 "write": true, 00:30:04.043 "unmap": true, 00:30:04.043 "flush": false, 00:30:04.043 "reset": true, 00:30:04.043 "nvme_admin": false, 00:30:04.043 "nvme_io": false, 00:30:04.043 "nvme_io_md": false, 00:30:04.043 "write_zeroes": true, 
00:30:04.043 "zcopy": false, 00:30:04.043 "get_zone_info": false, 00:30:04.043 "zone_management": false, 00:30:04.043 "zone_append": false, 00:30:04.043 "compare": false, 00:30:04.043 "compare_and_write": false, 00:30:04.043 "abort": false, 00:30:04.043 "seek_hole": true, 00:30:04.043 "seek_data": true, 00:30:04.043 "copy": false, 00:30:04.043 "nvme_iov_md": false 00:30:04.043 }, 00:30:04.043 "driver_specific": { 00:30:04.043 "lvol": { 00:30:04.043 "lvol_store_uuid": "db1bcb17-cf58-410c-bad1-8c72d3000207", 00:30:04.043 "base_bdev": "aio_bdev", 00:30:04.043 "thin_provision": false, 00:30:04.043 "num_allocated_clusters": 38, 00:30:04.043 "snapshot": false, 00:30:04.043 "clone": false, 00:30:04.043 "esnap_clone": false 00:30:04.043 } 00:30:04.043 } 00:30:04.043 } 00:30:04.043 ] 00:30:04.043 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:04.043 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db1bcb17-cf58-410c-bad1-8c72d3000207 00:30:04.043 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:04.301 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:04.301 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db1bcb17-cf58-410c-bad1-8c72d3000207 00:30:04.301 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:04.560 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:04.560 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:04.560 [2024-12-10 05:06:55.655088] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:04.819 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db1bcb17-cf58-410c-bad1-8c72d3000207 00:30:04.819 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:04.819 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db1bcb17-cf58-410c-bad1-8c72d3000207 00:30:04.819 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:04.819 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:04.819 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:04.819 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:04.819 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:04.819 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:04.819 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:04.819 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:04.819 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db1bcb17-cf58-410c-bad1-8c72d3000207 00:30:04.819 request: 00:30:04.819 { 00:30:04.819 "uuid": "db1bcb17-cf58-410c-bad1-8c72d3000207", 00:30:04.819 "method": "bdev_lvol_get_lvstores", 00:30:04.819 "req_id": 1 00:30:04.819 } 00:30:04.819 Got JSON-RPC error response 00:30:04.819 response: 00:30:04.819 { 00:30:04.819 "code": -19, 00:30:04.819 "message": "No such device" 00:30:04.819 } 00:30:04.819 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:04.819 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:04.819 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:04.819 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:04.819 05:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:05.078 aio_bdev 00:30:05.078 05:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a9ce379c-7857-47b5-8583-1ab66eb810a7 00:30:05.078 05:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a9ce379c-7857-47b5-8583-1ab66eb810a7 00:30:05.078 05:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:05.078 05:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:05.078 05:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:05.078 05:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:05.078 05:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:05.337 05:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a9ce379c-7857-47b5-8583-1ab66eb810a7 -t 2000 00:30:05.596 [ 00:30:05.596 { 00:30:05.596 "name": "a9ce379c-7857-47b5-8583-1ab66eb810a7", 00:30:05.596 "aliases": [ 00:30:05.596 "lvs/lvol" 00:30:05.596 ], 00:30:05.596 "product_name": "Logical Volume", 00:30:05.596 "block_size": 4096, 00:30:05.596 "num_blocks": 38912, 00:30:05.596 "uuid": "a9ce379c-7857-47b5-8583-1ab66eb810a7", 00:30:05.596 "assigned_rate_limits": { 00:30:05.596 "rw_ios_per_sec": 0, 00:30:05.596 "rw_mbytes_per_sec": 0, 00:30:05.596 
"r_mbytes_per_sec": 0, 00:30:05.596 "w_mbytes_per_sec": 0 00:30:05.596 }, 00:30:05.596 "claimed": false, 00:30:05.596 "zoned": false, 00:30:05.596 "supported_io_types": { 00:30:05.596 "read": true, 00:30:05.596 "write": true, 00:30:05.596 "unmap": true, 00:30:05.596 "flush": false, 00:30:05.596 "reset": true, 00:30:05.596 "nvme_admin": false, 00:30:05.596 "nvme_io": false, 00:30:05.596 "nvme_io_md": false, 00:30:05.596 "write_zeroes": true, 00:30:05.596 "zcopy": false, 00:30:05.596 "get_zone_info": false, 00:30:05.596 "zone_management": false, 00:30:05.596 "zone_append": false, 00:30:05.596 "compare": false, 00:30:05.596 "compare_and_write": false, 00:30:05.596 "abort": false, 00:30:05.596 "seek_hole": true, 00:30:05.596 "seek_data": true, 00:30:05.596 "copy": false, 00:30:05.596 "nvme_iov_md": false 00:30:05.596 }, 00:30:05.596 "driver_specific": { 00:30:05.596 "lvol": { 00:30:05.596 "lvol_store_uuid": "db1bcb17-cf58-410c-bad1-8c72d3000207", 00:30:05.596 "base_bdev": "aio_bdev", 00:30:05.596 "thin_provision": false, 00:30:05.596 "num_allocated_clusters": 38, 00:30:05.596 "snapshot": false, 00:30:05.596 "clone": false, 00:30:05.596 "esnap_clone": false 00:30:05.596 } 00:30:05.596 } 00:30:05.596 } 00:30:05.596 ] 00:30:05.596 05:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:05.596 05:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db1bcb17-cf58-410c-bad1-8c72d3000207 00:30:05.596 05:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:05.597 05:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:05.597 05:06:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db1bcb17-cf58-410c-bad1-8c72d3000207 00:30:05.597 05:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:05.856 05:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:05.856 05:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a9ce379c-7857-47b5-8583-1ab66eb810a7 00:30:06.115 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u db1bcb17-cf58-410c-bad1-8c72d3000207 00:30:06.374 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:06.374 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:06.634 00:30:06.634 real 0m16.933s 00:30:06.634 user 0m34.508s 00:30:06.634 sys 0m3.617s 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:06.634 ************************************ 00:30:06.634 END TEST lvs_grow_dirty 00:30:06.634 ************************************ 
00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:06.634 nvmf_trace.0 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:06.634 05:06:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:06.634 rmmod nvme_tcp 00:30:06.634 rmmod nvme_fabrics 00:30:06.634 rmmod nvme_keyring 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 822904 ']' 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 822904 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 822904 ']' 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 822904 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 822904 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:06.634 05:06:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 822904' 00:30:06.634 killing process with pid 822904 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 822904 00:30:06.634 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 822904 00:30:06.893 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:06.893 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:06.893 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:06.893 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:06.893 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:06.893 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:06.893 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:06.893 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:06.893 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:06.893 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.893 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.893 05:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.429 05:06:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:09.429 00:30:09.429 real 0m41.631s 00:30:09.429 user 0m52.033s 00:30:09.429 sys 0m10.005s 00:30:09.429 05:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:09.429 05:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:09.429 ************************************ 00:30:09.429 END TEST nvmf_lvs_grow 00:30:09.429 ************************************ 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:09.429 ************************************ 00:30:09.429 START TEST nvmf_bdev_io_wait 00:30:09.429 ************************************ 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:09.429 * Looking for test storage... 
00:30:09.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:09.429 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:09.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.430 --rc genhtml_branch_coverage=1 00:30:09.430 --rc genhtml_function_coverage=1 00:30:09.430 --rc genhtml_legend=1 00:30:09.430 --rc geninfo_all_blocks=1 00:30:09.430 --rc geninfo_unexecuted_blocks=1 00:30:09.430 00:30:09.430 ' 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:09.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.430 --rc genhtml_branch_coverage=1 00:30:09.430 --rc genhtml_function_coverage=1 00:30:09.430 --rc genhtml_legend=1 00:30:09.430 --rc geninfo_all_blocks=1 00:30:09.430 --rc geninfo_unexecuted_blocks=1 00:30:09.430 00:30:09.430 ' 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:09.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.430 --rc genhtml_branch_coverage=1 00:30:09.430 --rc genhtml_function_coverage=1 00:30:09.430 --rc genhtml_legend=1 00:30:09.430 --rc geninfo_all_blocks=1 00:30:09.430 --rc geninfo_unexecuted_blocks=1 00:30:09.430 00:30:09.430 ' 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:09.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.430 --rc genhtml_branch_coverage=1 00:30:09.430 --rc genhtml_function_coverage=1 
00:30:09.430 --rc genhtml_legend=1 00:30:09.430 --rc geninfo_all_blocks=1 00:30:09.430 --rc geninfo_unexecuted_blocks=1 00:30:09.430 00:30:09.430 ' 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:09.430 05:07:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.430 05:07:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:09.430 05:07:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:09.430 05:07:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:09.430 05:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:16.000 05:07:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:16.000 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:16.000 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:16.001 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:16.001 Found net devices under 0000:af:00.0: cvl_0_0 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:16.001 Found net devices under 0000:af:00.1: cvl_0_1 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:16.001 05:07:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:16.001 05:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:16.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:16.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:30:16.001 00:30:16.001 --- 10.0.0.2 ping statistics --- 00:30:16.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.001 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:16.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:30:16.001 00:30:16.001 --- 10.0.0.1 ping statistics --- 00:30:16.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.001 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:16.001 05:07:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=827071 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 827071 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 827071 ']' 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:16.001 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:16.001 [2024-12-10 05:07:06.196982] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:16.001 [2024-12-10 05:07:06.197845] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:30:16.001 [2024-12-10 05:07:06.197876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.001 [2024-12-10 05:07:06.273586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:16.001 [2024-12-10 05:07:06.312968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.002 [2024-12-10 05:07:06.313005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.002 [2024-12-10 05:07:06.313012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.002 [2024-12-10 05:07:06.313018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.002 [2024-12-10 05:07:06.313023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:16.002 [2024-12-10 05:07:06.314489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.002 [2024-12-10 05:07:06.314594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.002 [2024-12-10 05:07:06.314681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.002 [2024-12-10 05:07:06.314682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:16.002 [2024-12-10 05:07:06.315044] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.002 05:07:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:16.002 [2024-12-10 05:07:06.459194] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:16.002 [2024-12-10 05:07:06.459354] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:16.002 [2024-12-10 05:07:06.459751] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:16.002 [2024-12-10 05:07:06.459927] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:16.002 [2024-12-10 05:07:06.471542] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:16.002 Malloc0 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.002 05:07:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:16.002 [2024-12-10 05:07:06.543594] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=827100 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=827102 00:30:16.002 05:07:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.002 { 00:30:16.002 "params": { 00:30:16.002 "name": "Nvme$subsystem", 00:30:16.002 "trtype": "$TEST_TRANSPORT", 00:30:16.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.002 "adrfam": "ipv4", 00:30:16.002 "trsvcid": "$NVMF_PORT", 00:30:16.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.002 "hdgst": ${hdgst:-false}, 00:30:16.002 "ddgst": ${ddgst:-false} 00:30:16.002 }, 00:30:16.002 "method": "bdev_nvme_attach_controller" 00:30:16.002 } 00:30:16.002 EOF 00:30:16.002 )") 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=827104 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.002 05:07:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.002 { 00:30:16.002 "params": { 00:30:16.002 "name": "Nvme$subsystem", 00:30:16.002 "trtype": "$TEST_TRANSPORT", 00:30:16.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.002 "adrfam": "ipv4", 00:30:16.002 "trsvcid": "$NVMF_PORT", 00:30:16.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.002 "hdgst": ${hdgst:-false}, 00:30:16.002 "ddgst": ${ddgst:-false} 00:30:16.002 }, 00:30:16.002 "method": "bdev_nvme_attach_controller" 00:30:16.002 } 00:30:16.002 EOF 00:30:16.002 )") 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=827107 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.002 { 00:30:16.002 "params": { 00:30:16.002 "name": 
"Nvme$subsystem", 00:30:16.002 "trtype": "$TEST_TRANSPORT", 00:30:16.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.002 "adrfam": "ipv4", 00:30:16.002 "trsvcid": "$NVMF_PORT", 00:30:16.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.002 "hdgst": ${hdgst:-false}, 00:30:16.002 "ddgst": ${ddgst:-false} 00:30:16.002 }, 00:30:16.002 "method": "bdev_nvme_attach_controller" 00:30:16.002 } 00:30:16.002 EOF 00:30:16.002 )") 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:16.002 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.003 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.003 { 00:30:16.003 "params": { 00:30:16.003 "name": "Nvme$subsystem", 00:30:16.003 "trtype": "$TEST_TRANSPORT", 00:30:16.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.003 "adrfam": "ipv4", 00:30:16.003 "trsvcid": "$NVMF_PORT", 00:30:16.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.003 "hdgst": ${hdgst:-false}, 00:30:16.003 "ddgst": ${ddgst:-false} 00:30:16.003 }, 00:30:16.003 "method": 
"bdev_nvme_attach_controller" 00:30:16.003 } 00:30:16.003 EOF 00:30:16.003 )") 00:30:16.003 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:16.003 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 827100 00:30:16.003 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:16.003 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:16.003 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:16.003 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:16.003 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:16.003 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:16.003 "params": { 00:30:16.003 "name": "Nvme1", 00:30:16.003 "trtype": "tcp", 00:30:16.003 "traddr": "10.0.0.2", 00:30:16.003 "adrfam": "ipv4", 00:30:16.003 "trsvcid": "4420", 00:30:16.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:16.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:16.003 "hdgst": false, 00:30:16.003 "ddgst": false 00:30:16.003 }, 00:30:16.003 "method": "bdev_nvme_attach_controller" 00:30:16.003 }' 00:30:16.003 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:16.003 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:16.003 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:16.003 "params": { 00:30:16.003 "name": "Nvme1", 00:30:16.003 "trtype": "tcp", 00:30:16.003 "traddr": "10.0.0.2", 00:30:16.003 "adrfam": "ipv4", 00:30:16.003 "trsvcid": "4420", 00:30:16.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:16.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:16.003 "hdgst": false, 00:30:16.003 "ddgst": false 00:30:16.003 }, 00:30:16.003 "method": "bdev_nvme_attach_controller" 00:30:16.003 }' 00:30:16.003 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:16.003 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:16.003 "params": { 00:30:16.003 "name": "Nvme1", 00:30:16.003 "trtype": "tcp", 00:30:16.003 "traddr": "10.0.0.2", 00:30:16.003 "adrfam": "ipv4", 00:30:16.003 "trsvcid": "4420", 00:30:16.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:16.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:16.003 "hdgst": false, 00:30:16.003 "ddgst": false 00:30:16.003 }, 00:30:16.003 "method": "bdev_nvme_attach_controller" 00:30:16.003 }' 00:30:16.003 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:16.003 05:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:16.003 "params": { 00:30:16.003 "name": "Nvme1", 00:30:16.003 "trtype": "tcp", 00:30:16.003 "traddr": "10.0.0.2", 00:30:16.003 "adrfam": "ipv4", 00:30:16.003 "trsvcid": "4420", 00:30:16.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:16.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:16.003 "hdgst": false, 00:30:16.003 "ddgst": false 00:30:16.003 }, 00:30:16.003 "method": "bdev_nvme_attach_controller" 
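Each of the four bdevperf instances receives the same attach-controller JSON on `/dev/fd/63`; the `printf '%s\n'` lines above show the heredoc from `gen_nvmf_target_json` after expansion. A minimal reproduction of that expansion (only the fragment printed in the log is generated; whatever outer wrapping bdevperf's `--json` input uses is not visible here, so it is omitted):

```shell
# Reproduce the expanded config fragment that printf '%s\n' emits in the log.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

With `hdgst`/`ddgst` unset, the `${var:-false}` defaults yield the `"hdgst": false, "ddgst": false` seen in all four printed configs.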
00:30:16.003 }' 00:30:16.003 [2024-12-10 05:07:06.595768] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:30:16.003 [2024-12-10 05:07:06.595821] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:16.003 [2024-12-10 05:07:06.596159] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:30:16.003 [2024-12-10 05:07:06.596216] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:16.003 [2024-12-10 05:07:06.598374] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:30:16.003 [2024-12-10 05:07:06.598420] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:16.003 [2024-12-10 05:07:06.599089] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:30:16.003 [2024-12-10 05:07:06.599129] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:16.003 [2024-12-10 05:07:06.791247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.003 [2024-12-10 05:07:06.837464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:16.003 [2024-12-10 05:07:06.844094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.003 [2024-12-10 05:07:06.882215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:16.003 [2024-12-10 05:07:06.940468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.003 [2024-12-10 05:07:06.989530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.003 [2024-12-10 05:07:06.995584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:16.003 [2024-12-10 05:07:07.030645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:16.003 Running I/O for 1 seconds... 00:30:16.261 Running I/O for 1 seconds... 00:30:16.261 Running I/O for 1 seconds... 00:30:16.261 Running I/O for 1 seconds... 
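The four workloads (write, read, flush, unmap) run as four concurrent bdevperf processes whose PIDs are captured (`WRITE_PID=827100` etc.) and later reaped with `wait`, which is why four "Running I/O for 1 seconds..." lines interleave above. The concurrency pattern in isolation, with a sleep-based stub standing in for bdevperf (the stub is an illustration, not the real binary):

```shell
# Launch four stand-in workloads in parallel and reap them, mirroring the
# WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID handling in bdev_io_wait.sh.
workload() { sleep 0.1; echo "$1 done"; }   # stub for: bdevperf ... -w "$1"

out=$(
    workload write  & WRITE_PID=$!
    workload read   & READ_PID=$!
    workload flush  & FLUSH_PID=$!
    workload unmap  & UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
    echo "all workloads reaped"
)
echo "$out"
```

Reaping by explicit PID (rather than a bare `wait`) is what lets the script interleave the `wait 827100`, `wait 827102`, ... steps with other work, as the log shows.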
00:30:17.197 8217.00 IOPS, 32.10 MiB/s 00:30:17.197 Latency(us) 00:30:17.197 [2024-12-10T04:07:08.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.197 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:17.197 Nvme1n1 : 1.02 8221.75 32.12 0.00 0.00 15462.12 1482.36 23468.13 00:30:17.197 [2024-12-10T04:07:08.334Z] =================================================================================================================== 00:30:17.197 [2024-12-10T04:07:08.334Z] Total : 8221.75 32.12 0.00 0.00 15462.12 1482.36 23468.13 00:30:17.197 243208.00 IOPS, 950.03 MiB/s 00:30:17.197 Latency(us) 00:30:17.197 [2024-12-10T04:07:08.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.197 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:17.197 Nvme1n1 : 1.00 242840.33 948.60 0.00 0.00 523.97 222.35 1497.97 00:30:17.197 [2024-12-10T04:07:08.334Z] =================================================================================================================== 00:30:17.197 [2024-12-10T04:07:08.334Z] Total : 242840.33 948.60 0.00 0.00 523.97 222.35 1497.97 00:30:17.197 7612.00 IOPS, 29.73 MiB/s 00:30:17.197 Latency(us) 00:30:17.198 [2024-12-10T04:07:08.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.198 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:17.198 Nvme1n1 : 1.01 7694.33 30.06 0.00 0.00 16585.56 4993.22 25715.08 00:30:17.198 [2024-12-10T04:07:08.335Z] =================================================================================================================== 00:30:17.198 [2024-12-10T04:07:08.335Z] Total : 7694.33 30.06 0.00 0.00 16585.56 4993.22 25715.08 00:30:17.198 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 827102 00:30:17.198 13486.00 IOPS, 52.68 MiB/s 00:30:17.198 Latency(us) 00:30:17.198 
[2024-12-10T04:07:08.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.198 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:17.198 Nvme1n1 : 1.00 13573.42 53.02 0.00 0.00 9410.79 2543.42 14105.84 00:30:17.198 [2024-12-10T04:07:08.335Z] =================================================================================================================== 00:30:17.198 [2024-12-10T04:07:08.335Z] Total : 13573.42 53.02 0.00 0.00 9410.79 2543.42 14105.84 00:30:17.198 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 827104 00:30:17.198 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 827107 00:30:17.457 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:17.457 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:17.458 05:07:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:17.458 rmmod nvme_tcp 00:30:17.458 rmmod nvme_fabrics 00:30:17.458 rmmod nvme_keyring 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 827071 ']' 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 827071 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 827071 ']' 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 827071 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 827071 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 827071' 00:30:17.458 killing process with pid 827071 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 827071 00:30:17.458 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 827071 00:30:17.717 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:17.717 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:17.717 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:17.717 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:17.717 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:17.717 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:17.717 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:17.717 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:17.717 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:17.717 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.717 05:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.717 05:07:08 
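The firewall teardown above (`iptr`: `iptables-save | grep -v SPDK_NVMF | iptables-restore`) removes only the rules the test suite tagged, instead of flushing the whole ruleset. The filtering step in isolation, run against fabricated sample rules (the rule text here is invented for illustration; only the `SPDK_NVMF` marker comes from the log):

```shell
# Filter a saved ruleset the way nvmf/common.sh's iptr helper does:
# any line carrying the SPDK_NVMF marker is dropped, the rest is kept
# (and would be fed back through iptables-restore).
rules='-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -m comment --comment SPDK_NVMF -p tcp --dport 4420 -j DROP
-A OUTPUT -j ACCEPT'

kept=$(grep -v SPDK_NVMF <<<"$rules")
echo "$kept"
```

This keeps unrelated host rules intact while guaranteeing the test leaves no firewall state behind.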
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.623 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:19.623 00:30:19.623 real 0m10.687s 00:30:19.623 user 0m14.823s 00:30:19.623 sys 0m6.472s 00:30:19.623 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:19.623 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:19.623 ************************************ 00:30:19.623 END TEST nvmf_bdev_io_wait 00:30:19.623 ************************************ 00:30:19.882 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:19.882 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:19.882 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:19.882 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:19.882 ************************************ 00:30:19.882 START TEST nvmf_queue_depth 00:30:19.882 ************************************ 00:30:19.882 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:19.882 * Looking for test storage... 
00:30:19.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:19.882 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:19.882 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:30:19.882 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:19.882 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:19.882 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:19.882 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:19.883 05:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:19.883 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:19.883 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:19.883 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:19.883 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:19.883 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:19.883 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:19.883 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:30:19.883 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:19.883 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:19.883 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:19.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.883 --rc genhtml_branch_coverage=1 00:30:19.883 --rc genhtml_function_coverage=1 00:30:19.883 --rc genhtml_legend=1 00:30:19.883 --rc geninfo_all_blocks=1 00:30:19.883 --rc geninfo_unexecuted_blocks=1 00:30:19.883 00:30:19.883 ' 00:30:19.883 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:19.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.883 --rc genhtml_branch_coverage=1 00:30:19.883 --rc genhtml_function_coverage=1 00:30:19.883 --rc genhtml_legend=1 00:30:19.883 --rc geninfo_all_blocks=1 00:30:19.883 --rc geninfo_unexecuted_blocks=1 00:30:19.883 00:30:19.883 ' 00:30:19.883 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:19.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.883 --rc genhtml_branch_coverage=1 00:30:19.883 --rc genhtml_function_coverage=1 00:30:19.883 --rc genhtml_legend=1 00:30:19.883 --rc geninfo_all_blocks=1 00:30:19.883 --rc geninfo_unexecuted_blocks=1 00:30:19.883 00:30:19.883 ' 00:30:19.883 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:19.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.883 --rc genhtml_branch_coverage=1 00:30:19.883 --rc genhtml_function_coverage=1 00:30:19.883 --rc genhtml_legend=1 00:30:19.883 --rc 
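The lcov version gate traced above (`lt 1.15 2` via `cmp_versions 1.15 '<' 2` in scripts/common.sh) splits both version strings into components and compares them left to right, with missing components treated as 0. A simplified re-sketch of that comparison; the function name `ver_lt` is made up, and this handles only the strict less-than case of the real helper:

```shell
# Return 0 (true) when $1 is a strictly lower version than $2,
# comparing dot-separated numeric components left to right.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # absent component compares as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}
```

Numeric comparison per component is what makes `1.15 < 2` true here even though the string `"1.15"` sorts after `"2"` would be wrong lexically in the other direction (`1.2 < 1.10` holds numerically but not as strings).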
geninfo_all_blocks=1 00:30:19.883 --rc geninfo_unexecuted_blocks=1 00:30:19.883 00:30:19.883 ' 00:30:19.883 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.883 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:19.883 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.883 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.143 05:07:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:20.143 05:07:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:20.143 05:07:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:20.143 05:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:26.714 
05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:26.714 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.714 05:07:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:26.714 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:26.714 Found net devices under 0000:af:00.0: cvl_0_0 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:26.714 Found net devices under 0000:af:00.1: cvl_0_1 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:26.714 05:07:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:26.714 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:26.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:26.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:30:26.715 00:30:26.715 --- 10.0.0.2 ping statistics --- 00:30:26.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.715 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:26.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:30:26.715 00:30:26.715 --- 10.0.0.1 ping statistics --- 00:30:26.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.715 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:26.715 05:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:26.715 05:07:17 
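The network initialization traced above (`nvmftestinit` / `nvmf_tcp_init`) moves one port of the detected e810 NIC pair into a network namespace so that target and initiator traffic crosses the physical link. A minimal sketch of the equivalent commands, using the interface names (`cvl_0_0`, `cvl_0_1`), namespace, and addresses from this run — in the test these are wrapped by `nvmf/common.sh`, and all of them require root:

```shell
# Target-side port goes into its own namespace; the initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator IP on the root-ns port, target IP inside the namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP (port 4420) in, then verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The ping round trips in the log (0.330 ms and 0.200 ms) confirm the two-namespace topology is up before the target starts.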
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=830818 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 830818 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 830818 ']' 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.715 [2024-12-10 05:07:17.069484] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:26.715 [2024-12-10 05:07:17.070503] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:30:26.715 [2024-12-10 05:07:17.070545] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.715 [2024-12-10 05:07:17.152005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.715 [2024-12-10 05:07:17.191260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.715 [2024-12-10 05:07:17.191294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.715 [2024-12-10 05:07:17.191301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.715 [2024-12-10 05:07:17.191307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.715 [2024-12-10 05:07:17.191311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:26.715 [2024-12-10 05:07:17.191779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.715 [2024-12-10 05:07:17.259569] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:26.715 [2024-12-10 05:07:17.259790] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.715 [2024-12-10 05:07:17.320524] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.715 Malloc0 00:30:26.715 05:07:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.715 [2024-12-10 05:07:17.400587] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.715 
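At this point the target application (`nvmf_tgt`, pid 830818, core mask 0x2, interrupt mode) has been configured over its RPC socket. The `rpc_cmd` calls traced above correspond to the following sequence — a sketch using SPDK's `scripts/rpc.py` with the exact arguments from this run (the `RPC` path variable is illustrative):

```shell
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Create the TCP transport (-o: optimized/in-capsule data, -u 8192: IO unit size).
$RPC nvmf_create_transport -t tcp -o -u 8192

# 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE).
$RPC bdev_malloc_create 64 512 -b Malloc0

# Subsystem with allow-any-host (-a) and a serial number, then attach the bdev
# as a namespace and listen on the in-namespace target IP from the setup above.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice in the log marks the last of these taking effect; bdevperf then attaches to that subsystem as initiator.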
05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=831019 00:30:26.715 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:26.716 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:26.716 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 831019 /var/tmp/bdevperf.sock 00:30:26.716 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 831019 ']' 00:30:26.716 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:26.716 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.716 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:26.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:26.716 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.716 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.716 [2024-12-10 05:07:17.442243] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:30:26.716 [2024-12-10 05:07:17.442299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid831019 ] 00:30:26.716 [2024-12-10 05:07:17.516557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.716 [2024-12-10 05:07:17.558287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.716 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.716 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:26.716 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:26.716 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.716 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.716 NVMe0n1 00:30:26.716 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.716 05:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:26.716 Running I/O for 10 seconds... 
00:30:29.030 12264.00 IOPS, 47.91 MiB/s [2024-12-10T04:07:21.104Z] 12273.50 IOPS, 47.94 MiB/s [2024-12-10T04:07:22.040Z] 12287.67 IOPS, 48.00 MiB/s [2024-12-10T04:07:22.975Z] 12331.00 IOPS, 48.17 MiB/s [2024-12-10T04:07:23.911Z] 12436.80 IOPS, 48.58 MiB/s [2024-12-10T04:07:24.856Z] 12450.00 IOPS, 48.63 MiB/s [2024-12-10T04:07:26.234Z] 12453.86 IOPS, 48.65 MiB/s [2024-12-10T04:07:27.172Z] 12524.25 IOPS, 48.92 MiB/s [2024-12-10T04:07:28.111Z] 12537.11 IOPS, 48.97 MiB/s [2024-12-10T04:07:28.111Z] 12591.00 IOPS, 49.18 MiB/s 00:30:36.974 Latency(us) 00:30:36.974 [2024-12-10T04:07:28.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.974 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:36.974 Verification LBA range: start 0x0 length 0x4000 00:30:36.974 NVMe0n1 : 10.06 12609.05 49.25 0.00 0.00 80953.69 19223.89 51430.16 00:30:36.974 [2024-12-10T04:07:28.111Z] =================================================================================================================== 00:30:36.974 [2024-12-10T04:07:28.111Z] Total : 12609.05 49.25 0.00 0.00 80953.69 19223.89 51430.16 00:30:36.974 { 00:30:36.974 "results": [ 00:30:36.974 { 00:30:36.974 "job": "NVMe0n1", 00:30:36.974 "core_mask": "0x1", 00:30:36.974 "workload": "verify", 00:30:36.974 "status": "finished", 00:30:36.974 "verify_range": { 00:30:36.974 "start": 0, 00:30:36.974 "length": 16384 00:30:36.974 }, 00:30:36.974 "queue_depth": 1024, 00:30:36.974 "io_size": 4096, 00:30:36.974 "runtime": 10.06491, 00:30:36.974 "iops": 12609.05462641991, 00:30:36.974 "mibps": 49.25411963445277, 00:30:36.974 "io_failed": 0, 00:30:36.974 "io_timeout": 0, 00:30:36.974 "avg_latency_us": 80953.68513853007, 00:30:36.974 "min_latency_us": 19223.893333333333, 00:30:36.974 "max_latency_us": 51430.15619047619 00:30:36.974 } 00:30:36.974 ], 00:30:36.974 "core_count": 1 00:30:36.974 } 00:30:36.974 05:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
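The bdevperf summary line can be sanity-checked against the JSON result block: MiB/s is simply IOPS times the 4 KiB IO size (`-o 4096`), converted to MiB. A small check of that arithmetic, with the values copied from the result above:

```python
# Throughput bookkeeping for the bdevperf result above.
iops = 12609.05462641991      # "iops" from the JSON
io_size = 4096                # "io_size" in bytes (-o 4096)
runtime = 10.06491            # "runtime" in seconds

mibps = iops * io_size / (1 << 20)   # bytes/s -> MiB/s
print(round(mibps, 2))               # matches the reported 49.25 MiB/s
```

At queue depth 1024 (`-q 1024`) the ~81 µs average latency in the JSON is likewise consistent: `queue_depth / iops` ≈ 1024 / 12609 ≈ 81.2 ms of aggregate in-flight time per second, i.e. ~81 µs per IO.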
target/queue_depth.sh@39 -- # killprocess 831019 00:30:36.974 05:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 831019 ']' 00:30:36.974 05:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 831019 00:30:36.974 05:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:36.974 05:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:36.974 05:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 831019 00:30:36.974 05:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:36.974 05:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:36.974 05:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 831019' 00:30:36.974 killing process with pid 831019 00:30:36.974 05:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 831019 00:30:36.974 Received shutdown signal, test time was about 10.000000 seconds 00:30:36.974 00:30:36.974 Latency(us) 00:30:36.974 [2024-12-10T04:07:28.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.974 [2024-12-10T04:07:28.111Z] =================================================================================================================== 00:30:36.974 [2024-12-10T04:07:28.111Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:36.974 05:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 831019 00:30:37.233 05:07:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:37.233 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:37.233 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.233 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:37.233 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.234 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:37.234 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.234 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.234 rmmod nvme_tcp 00:30:37.234 rmmod nvme_fabrics 00:30:37.234 rmmod nvme_keyring 00:30:37.234 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.234 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:37.234 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:37.234 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 830818 ']' 00:30:37.234 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 830818 00:30:37.234 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 830818 ']' 00:30:37.234 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 830818 00:30:37.234 05:07:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:37.234 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:37.234 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 830818 00:30:37.234 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:37.234 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:37.234 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 830818' 00:30:37.234 killing process with pid 830818 00:30:37.234 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 830818 00:30:37.234 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 830818 00:30:37.493 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:37.493 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:37.493 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:37.493 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:37.493 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:30:37.493 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:37.493 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:30:37.493 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:37.493 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:37.493 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.493 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.493 05:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.399 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:39.399 00:30:39.399 real 0m19.673s 00:30:39.399 user 0m22.646s 00:30:39.399 sys 0m6.187s 00:30:39.399 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:39.399 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.399 ************************************ 00:30:39.399 END TEST nvmf_queue_depth 00:30:39.399 ************************************ 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:39.661 ************************************ 00:30:39.661 START 
TEST nvmf_target_multipath 00:30:39.661 ************************************ 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:39.661 * Looking for test storage... 00:30:39.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:39.661 05:07:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:39.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.661 --rc genhtml_branch_coverage=1 00:30:39.661 --rc genhtml_function_coverage=1 00:30:39.661 --rc genhtml_legend=1 00:30:39.661 --rc geninfo_all_blocks=1 00:30:39.661 --rc geninfo_unexecuted_blocks=1 00:30:39.661 00:30:39.661 ' 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:39.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.661 --rc genhtml_branch_coverage=1 00:30:39.661 --rc genhtml_function_coverage=1 00:30:39.661 --rc genhtml_legend=1 00:30:39.661 --rc geninfo_all_blocks=1 00:30:39.661 --rc geninfo_unexecuted_blocks=1 00:30:39.661 00:30:39.661 ' 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:39.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.661 --rc genhtml_branch_coverage=1 00:30:39.661 --rc genhtml_function_coverage=1 00:30:39.661 --rc genhtml_legend=1 00:30:39.661 --rc geninfo_all_blocks=1 00:30:39.661 --rc geninfo_unexecuted_blocks=1 00:30:39.661 00:30:39.661 ' 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:39.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.661 --rc genhtml_branch_coverage=1 00:30:39.661 --rc genhtml_function_coverage=1 00:30:39.661 --rc genhtml_legend=1 00:30:39.661 --rc geninfo_all_blocks=1 00:30:39.661 --rc geninfo_unexecuted_blocks=1 00:30:39.661 00:30:39.661 ' 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.661 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:39.662 05:07:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:39.662 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:39.921 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.921 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.921 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.921 05:07:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:39.921 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:39.921 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:39.921 05:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.301 05:07:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:45.301 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:45.301 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:45.301 Found net devices under 0000:af:00.0: cvl_0_0 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.301 05:07:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:45.301 Found net devices under 0000:af:00.1: cvl_0_1 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.301 05:07:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.301 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.561 05:07:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:30:45.561 00:30:45.561 --- 10.0.0.2 ping statistics --- 00:30:45.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.561 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:45.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:30:45.561 00:30:45.561 --- 10.0.0.1 ping statistics --- 00:30:45.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.561 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:45.561 only one NIC for nvmf test 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:45.561 05:07:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:45.561 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:45.561 rmmod nvme_tcp 00:30:45.561 rmmod nvme_fabrics 00:30:45.561 rmmod nvme_keyring 00:30:45.820 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:45.820 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:45.820 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:45.820 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:45.820 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:45.820 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:45.820 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:45.820 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:45.820 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:45.820 05:07:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:45.820 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:45.820 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:45.820 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:45.820 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.820 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.820 05:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.724 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:47.724 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:47.724 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.725 
05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:47.725 00:30:47.725 real 0m8.237s 00:30:47.725 user 0m1.769s 00:30:47.725 sys 0m4.490s 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:47.725 ************************************ 00:30:47.725 END TEST nvmf_target_multipath 00:30:47.725 ************************************ 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:47.725 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:47.984 ************************************ 00:30:47.984 START TEST nvmf_zcopy 00:30:47.984 ************************************ 00:30:47.984 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:47.984 * Looking for test storage... 
00:30:47.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:47.985 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:47.985 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:30:47.985 05:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:47.985 05:07:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:47.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.985 --rc genhtml_branch_coverage=1 00:30:47.985 --rc genhtml_function_coverage=1 00:30:47.985 --rc genhtml_legend=1 00:30:47.985 --rc geninfo_all_blocks=1 00:30:47.985 --rc geninfo_unexecuted_blocks=1 00:30:47.985 00:30:47.985 ' 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:47.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.985 --rc genhtml_branch_coverage=1 00:30:47.985 --rc genhtml_function_coverage=1 00:30:47.985 --rc genhtml_legend=1 00:30:47.985 --rc geninfo_all_blocks=1 00:30:47.985 --rc geninfo_unexecuted_blocks=1 00:30:47.985 00:30:47.985 ' 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:47.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.985 --rc genhtml_branch_coverage=1 00:30:47.985 --rc genhtml_function_coverage=1 00:30:47.985 --rc genhtml_legend=1 00:30:47.985 --rc geninfo_all_blocks=1 00:30:47.985 --rc geninfo_unexecuted_blocks=1 00:30:47.985 00:30:47.985 ' 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:47.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.985 --rc genhtml_branch_coverage=1 00:30:47.985 --rc genhtml_function_coverage=1 00:30:47.985 --rc genhtml_legend=1 00:30:47.985 --rc geninfo_all_blocks=1 00:30:47.985 --rc geninfo_unexecuted_blocks=1 00:30:47.985 00:30:47.985 ' 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:47.985 05:07:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:47.985 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:47.986 05:07:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:47.986 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:47.986 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:47.986 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:47.986 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:47.986 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:47.986 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.986 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:47.986 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.986 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:47.986 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:47.986 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:47.986 05:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:54.558 
05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:54.558 05:07:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:54.558 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:54.558 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:54.558 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:54.559 Found net devices under 0000:af:00.0: cvl_0_0 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:54.559 Found net devices under 0000:af:00.1: cvl_0_1 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:54.559 05:07:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:54.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:54.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:30:54.559 00:30:54.559 --- 10.0.0.2 ping statistics --- 00:30:54.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.559 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:54.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:54.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:30:54.559 00:30:54.559 --- 10.0.0.1 ping statistics --- 00:30:54.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.559 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:54.559 05:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=839560 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 839560 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 839560 ']' 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:54.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.559 [2024-12-10 05:07:45.068291] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:54.559 [2024-12-10 05:07:45.069196] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:30:54.559 [2024-12-10 05:07:45.069230] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:54.559 [2024-12-10 05:07:45.147585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.559 [2024-12-10 05:07:45.186175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:54.559 [2024-12-10 05:07:45.186207] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:54.559 [2024-12-10 05:07:45.186214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:54.559 [2024-12-10 05:07:45.186220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:54.559 [2024-12-10 05:07:45.186225] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:54.559 [2024-12-10 05:07:45.186664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.559 [2024-12-10 05:07:45.252444] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:54.559 [2024-12-10 05:07:45.252639] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:54.559 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.560 [2024-12-10 05:07:45.319342] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.560 
05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.560 [2024-12-10 05:07:45.347545] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.560 malloc0 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:54.560 { 00:30:54.560 "params": { 00:30:54.560 "name": "Nvme$subsystem", 00:30:54.560 "trtype": "$TEST_TRANSPORT", 00:30:54.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:54.560 "adrfam": "ipv4", 00:30:54.560 "trsvcid": "$NVMF_PORT", 00:30:54.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:54.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:54.560 "hdgst": ${hdgst:-false}, 00:30:54.560 "ddgst": ${ddgst:-false} 00:30:54.560 }, 00:30:54.560 "method": "bdev_nvme_attach_controller" 00:30:54.560 } 00:30:54.560 EOF 00:30:54.560 )") 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:54.560 05:07:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:54.560 05:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:54.560 "params": { 00:30:54.560 "name": "Nvme1", 00:30:54.560 "trtype": "tcp", 00:30:54.560 "traddr": "10.0.0.2", 00:30:54.560 "adrfam": "ipv4", 00:30:54.560 "trsvcid": "4420", 00:30:54.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:54.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:54.560 "hdgst": false, 00:30:54.560 "ddgst": false 00:30:54.560 }, 00:30:54.560 "method": "bdev_nvme_attach_controller" 00:30:54.560 }' 00:30:54.560 [2024-12-10 05:07:45.444254] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:30:54.560 [2024-12-10 05:07:45.444297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid839587 ] 00:30:54.560 [2024-12-10 05:07:45.519063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.560 [2024-12-10 05:07:45.558245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.819 Running I/O for 10 seconds... 
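The xtrace above shows `gen_nvmf_target_json` building a per-subsystem JSON fragment with a heredoc inside `$(cat <<-EOF …)`, joining the fragments with `IFS=,`, and piping the result through `jq .` into bdevperf via a `/dev/fd` process substitution. The following is a minimal standalone sketch of that pattern — the function body is a simplified reconstruction for illustration, not SPDK's exact `nvmf/common.sh` code, and the variable names (`TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, `NVMF_PORT`) are taken from the log:

```shell
#!/usr/bin/env bash
# Values mirroring the ones visible in this log run.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

# Simplified analogue of gen_nvmf_target_json: one JSON fragment per
# subsystem argument (default "1"), accumulated in an array, then joined
# with commas. A consumer such as bdevperf would read this over a process
# substitution, e.g.:  bdevperf --json <(gen_target_json 1)
gen_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join fragments with "," so multiple subsystems form a valid JSON list body.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_target_json 1
```

Feeding the generated text through `--json /dev/fd/62` (as the log does) lets the test attach bdevperf to the just-created `cnode1` listener without writing a temporary config file to disk.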
00:30:57.135 8556.00 IOPS, 66.84 MiB/s [2024-12-10T04:07:49.209Z] 8547.00 IOPS, 66.77 MiB/s [2024-12-10T04:07:50.146Z] 8591.67 IOPS, 67.12 MiB/s [2024-12-10T04:07:51.084Z] 8613.75 IOPS, 67.29 MiB/s [2024-12-10T04:07:52.022Z] 8565.00 IOPS, 66.91 MiB/s [2024-12-10T04:07:52.959Z] 8570.33 IOPS, 66.96 MiB/s [2024-12-10T04:07:54.338Z] 8585.43 IOPS, 67.07 MiB/s [2024-12-10T04:07:54.906Z] 8601.50 IOPS, 67.20 MiB/s [2024-12-10T04:07:56.283Z] 8610.33 IOPS, 67.27 MiB/s [2024-12-10T04:07:56.283Z] 8617.60 IOPS, 67.33 MiB/s 00:31:05.146 Latency(us) 00:31:05.146 [2024-12-10T04:07:56.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.147 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:05.147 Verification LBA range: start 0x0 length 0x1000 00:31:05.147 Nvme1n1 : 10.05 8584.16 67.06 0.00 0.00 14813.41 2262.55 43690.67 00:31:05.147 [2024-12-10T04:07:56.284Z] =================================================================================================================== 00:31:05.147 [2024-12-10T04:07:56.284Z] Total : 8584.16 67.06 0.00 0.00 14813.41 2262.55 43690.67 00:31:05.147 05:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=841350 00:31:05.147 05:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:31:05.147 05:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.147 05:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:05.147 05:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:31:05.147 05:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:05.147 05:07:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:05.147 05:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:05.147 05:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:05.147 { 00:31:05.147 "params": { 00:31:05.147 "name": "Nvme$subsystem", 00:31:05.147 "trtype": "$TEST_TRANSPORT", 00:31:05.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.147 "adrfam": "ipv4", 00:31:05.147 "trsvcid": "$NVMF_PORT", 00:31:05.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.147 "hdgst": ${hdgst:-false}, 00:31:05.147 "ddgst": ${ddgst:-false} 00:31:05.147 }, 00:31:05.147 "method": "bdev_nvme_attach_controller" 00:31:05.147 } 00:31:05.147 EOF 00:31:05.147 )") 00:31:05.147 [2024-12-10 05:07:56.110997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.147 [2024-12-10 05:07:56.111028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.147 05:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:05.147 05:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:31:05.147 [2024-12-10 05:07:56.118968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.147 [2024-12-10 05:07:56.118981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.147 05:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:05.147 05:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:05.147 "params": { 00:31:05.147 "name": "Nvme1", 00:31:05.147 "trtype": "tcp", 00:31:05.147 "traddr": "10.0.0.2", 00:31:05.147 "adrfam": "ipv4", 00:31:05.147 "trsvcid": "4420", 00:31:05.147 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:05.147 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:05.147 "hdgst": false, 00:31:05.147 "ddgst": false 00:31:05.147 }, 00:31:05.147 "method": "bdev_nvme_attach_controller" 00:31:05.147 }' 00:31:05.147 [2024-12-10 05:07:56.126964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.147 [2024-12-10 05:07:56.126975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.147 [2024-12-10 05:07:56.134963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.147 [2024-12-10 05:07:56.134975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.147 [2024-12-10 05:07:56.146966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.147 [2024-12-10 05:07:56.146977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.147 [2024-12-10 05:07:56.150737] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:31:05.147 [2024-12-10 05:07:56.150780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid841350 ]
00:31:05.147 [2024-12-10 05:07:56.158964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:05.147 [2024-12-10 05:07:56.158975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:05.147 [2024-12-10 05:07:56.223286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:05.147 [2024-12-10 05:07:56.263076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:05.667 Running I/O for 5 seconds...
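The error pair repeating through this run comes from an RPC that keeps requesting the fixed NSID 1 while that NSID is already allocated in the subsystem. The allocation rule can be sketched as a minimal model (hypothetical illustration only, not SPDK's implementation; `add_ns` and the bdev names are invented for the sketch): an explicitly requested NSID that is in use is rejected, while NSID 0 asks the target to auto-assign the lowest free ID.

```python
# Hypothetical model of NSID allocation in an NVMe-oF subsystem.
# Not SPDK code: it only illustrates why repeated adds with nsid=1 fail
# once NSID 1 is taken, while nsid=0 (auto-assign) would succeed.

def add_ns(namespaces: dict, bdev: str, nsid: int = 0) -> int:
    """Attach bdev as a namespace; return its NSID or raise if the NSID is taken."""
    if nsid == 0:
        # Auto-assign: pick the lowest unused NSID, starting at 1.
        nsid = next(i for i in range(1, len(namespaces) + 2) if i not in namespaces)
    elif nsid in namespaces:
        raise ValueError(f"Requested NSID {nsid} already in use")
    namespaces[nsid] = bdev
    return nsid

ns = {}
add_ns(ns, "Malloc0", nsid=1)      # first add of NSID 1 succeeds
try:
    add_ns(ns, "Malloc1", nsid=1)  # second add with the same NSID is rejected
except ValueError as e:
    print(e)                       # Requested NSID 1 already in use
print(add_ns(ns, "Malloc1"))       # auto-assign picks the next free NSID: 2
```

Under this reading the test is exercising the rejection path deliberately, so the repeated *ERROR* lines are expected noise rather than a failure of the run itself.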
00:31:06.704 16860.00 IOPS, 131.72 MiB/s [2024-12-10T04:07:57.841Z]
00:31:07.224 [2024-12-10 05:07:58.331477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:07.224 [2024-12-10 05:07:58.331495] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.224 [2024-12-10 05:07:58.347591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.224 [2024-12-10 05:07:58.347610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.483 [2024-12-10 05:07:58.362837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.483 [2024-12-10 05:07:58.362857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.483 [2024-12-10 05:07:58.376896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.483 [2024-12-10 05:07:58.376916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.483 [2024-12-10 05:07:58.391580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.483 [2024-12-10 05:07:58.391600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.483 [2024-12-10 05:07:58.406955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.483 [2024-12-10 05:07:58.406975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.483 [2024-12-10 05:07:58.421374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.483 [2024-12-10 05:07:58.421394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.483 [2024-12-10 05:07:58.436502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.483 [2024-12-10 05:07:58.436524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.483 [2024-12-10 05:07:58.451056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.483 [2024-12-10 05:07:58.451077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:07.483 [2024-12-10 05:07:58.464456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.483 [2024-12-10 05:07:58.464477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.484 [2024-12-10 05:07:58.479079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.484 [2024-12-10 05:07:58.479100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.484 [2024-12-10 05:07:58.492703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.484 [2024-12-10 05:07:58.492723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.484 [2024-12-10 05:07:58.507232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.484 [2024-12-10 05:07:58.507251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.484 [2024-12-10 05:07:58.518385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.484 [2024-12-10 05:07:58.518405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.484 [2024-12-10 05:07:58.532772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.484 [2024-12-10 05:07:58.532792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.484 [2024-12-10 05:07:58.547081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.484 [2024-12-10 05:07:58.547101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.484 [2024-12-10 05:07:58.560062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.484 [2024-12-10 05:07:58.560081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.484 [2024-12-10 05:07:58.574568] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.484 [2024-12-10 05:07:58.574588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.484 [2024-12-10 05:07:58.588596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.484 [2024-12-10 05:07:58.588616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.484 [2024-12-10 05:07:58.602843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.484 [2024-12-10 05:07:58.602862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.743 [2024-12-10 05:07:58.616450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.743 [2024-12-10 05:07:58.616470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.743 16921.00 IOPS, 132.20 MiB/s [2024-12-10T04:07:58.880Z] [2024-12-10 05:07:58.631151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.743 [2024-12-10 05:07:58.631177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.743 [2024-12-10 05:07:58.641755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.743 [2024-12-10 05:07:58.641775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.743 [2024-12-10 05:07:58.656264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.743 [2024-12-10 05:07:58.656285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.743 [2024-12-10 05:07:58.670867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.743 [2024-12-10 05:07:58.670886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.743 [2024-12-10 05:07:58.684054] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.743 [2024-12-10 05:07:58.684074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.743 [2024-12-10 05:07:58.699190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.743 [2024-12-10 05:07:58.699210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.743 [2024-12-10 05:07:58.712665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.743 [2024-12-10 05:07:58.712684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.744 [2024-12-10 05:07:58.726987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.744 [2024-12-10 05:07:58.727006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.744 [2024-12-10 05:07:58.739143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.744 [2024-12-10 05:07:58.739162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.744 [2024-12-10 05:07:58.752518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.744 [2024-12-10 05:07:58.752536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.744 [2024-12-10 05:07:58.767094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.744 [2024-12-10 05:07:58.767113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.744 [2024-12-10 05:07:58.779828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.744 [2024-12-10 05:07:58.779847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.744 [2024-12-10 05:07:58.794622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:07.744 [2024-12-10 05:07:58.794642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.744 [2024-12-10 05:07:58.805508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.744 [2024-12-10 05:07:58.805527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.744 [2024-12-10 05:07:58.819992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.744 [2024-12-10 05:07:58.820012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.744 [2024-12-10 05:07:58.834710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.744 [2024-12-10 05:07:58.834729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.744 [2024-12-10 05:07:58.848832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.744 [2024-12-10 05:07:58.848852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.744 [2024-12-10 05:07:58.863287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.744 [2024-12-10 05:07:58.863306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:58.875901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:58.875926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:58.890706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:58.890725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:58.904888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 
[2024-12-10 05:07:58.904906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:58.919661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:58.919681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:58.934922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:58.934942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:58.948541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:58.948561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:58.963131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:58.963150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:58.975305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:58.975324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:58.988667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:58.988686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:59.003053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:59.003072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:59.015648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:59.015667] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:59.030428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:59.030447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:59.044707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:59.044726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:59.058862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:59.058880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:59.071614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:59.071632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:59.086191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:59.086211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:59.100099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:59.100117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:59.114906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:59.114924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.003 [2024-12-10 05:07:59.127431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.003 [2024-12-10 05:07:59.127449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:08.263 [2024-12-10 05:07:59.140499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.140522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.155054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.155073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.168188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.168206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.182847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.182865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.196077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.196096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.211051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.211070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.223787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.223806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.238899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.238919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.251780] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.251800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.264420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.264439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.279292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.279311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.294995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.295014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.308963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.308982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.323424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.323443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.336489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.336508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.350987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.351006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.362720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.362739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.376539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.376558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.263 [2024-12-10 05:07:59.390720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.263 [2024-12-10 05:07:59.390740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.522 [2024-12-10 05:07:59.403092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.522 [2024-12-10 05:07:59.403116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.522 [2024-12-10 05:07:59.417024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.522 [2024-12-10 05:07:59.417042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.522 [2024-12-10 05:07:59.431211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.522 [2024-12-10 05:07:59.431230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.522 [2024-12-10 05:07:59.443301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.522 [2024-12-10 05:07:59.443320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.522 [2024-12-10 05:07:59.456692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.522 [2024-12-10 05:07:59.456712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.522 [2024-12-10 05:07:59.471715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.522 
[2024-12-10 05:07:59.471735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.522 [2024-12-10 05:07:59.484628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.522 [2024-12-10 05:07:59.484647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.522 [2024-12-10 05:07:59.499498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.522 [2024-12-10 05:07:59.499517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.522 [2024-12-10 05:07:59.514004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.522 [2024-12-10 05:07:59.514023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.522 [2024-12-10 05:07:59.528233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.522 [2024-12-10 05:07:59.528252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.522 [2024-12-10 05:07:59.542519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.522 [2024-12-10 05:07:59.542538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.522 [2024-12-10 05:07:59.555944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.523 [2024-12-10 05:07:59.555963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.523 [2024-12-10 05:07:59.570600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.523 [2024-12-10 05:07:59.570618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.523 [2024-12-10 05:07:59.583861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.523 [2024-12-10 05:07:59.583879] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.523 [2024-12-10 05:07:59.599155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.523 [2024-12-10 05:07:59.599179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.523 [2024-12-10 05:07:59.612649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.523 [2024-12-10 05:07:59.612668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.523 16955.67 IOPS, 132.47 MiB/s [2024-12-10T04:07:59.660Z] [2024-12-10 05:07:59.627571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.523 [2024-12-10 05:07:59.627590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.523 [2024-12-10 05:07:59.642339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.523 [2024-12-10 05:07:59.642359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.782 [2024-12-10 05:07:59.656929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.782 [2024-12-10 05:07:59.656950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.782 [2024-12-10 05:07:59.671780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.782 [2024-12-10 05:07:59.671798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.782 [2024-12-10 05:07:59.686491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.782 [2024-12-10 05:07:59.686510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.782 [2024-12-10 05:07:59.700828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.782 [2024-12-10 05:07:59.700846] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.782 [2024-12-10 05:07:59.715073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.782 [2024-12-10 05:07:59.715092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.782 [2024-12-10 05:07:59.728442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.782 [2024-12-10 05:07:59.728461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.782 [2024-12-10 05:07:59.739392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.782 [2024-12-10 05:07:59.739411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.782 [2024-12-10 05:07:59.752826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.782 [2024-12-10 05:07:59.752845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.782 [2024-12-10 05:07:59.767260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.782 [2024-12-10 05:07:59.767279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.782 [2024-12-10 05:07:59.782898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.782 [2024-12-10 05:07:59.782918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.782 [2024-12-10 05:07:59.796552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.782 [2024-12-10 05:07:59.796570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.782 [2024-12-10 05:07:59.811193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.782 [2024-12-10 05:07:59.811212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:08.782 [2024-12-10 05:07:59.825067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.782 [2024-12-10 05:07:59.825086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the two-line error pair above repeats with fresh timestamps every ~12-16 ms through 05:08:00.619 while the subsystem remains paused; near-identical repeats omitted ...]
00:31:09.560 16918.75 IOPS, 132.18 MiB/s [2024-12-10T04:08:00.697Z]
[... the error pair continues repeating every ~12-16 ms through 05:08:01.619; near-identical repeats omitted ...]
00:31:10.596 16924.20 IOPS, 132.22 MiB/s [2024-12-10T04:08:01.733Z]
[... one final repeat of the error pair at 05:08:01.630 omitted ...]
00:31:10.596 Latency(us)
00:31:10.596 [2024-12-10T04:08:01.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:10.596 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:31:10.596 Nvme1n1 : 5.01 16926.79 132.24 0.00 0.00 7555.22 1997.29 13731.35
00:31:10.596 [2024-12-10T04:08:01.733Z] ===================================================================================================================
00:31:10.597 [2024-12-10T04:08:01.734Z] Total : 16926.79 132.24 0.00 0.00 7555.22 1997.29 13731.35
00:31:10.597 [... after the run completes, the same error pair resumes repeating at ~12 ms intervals from 05:08:01.638 ...]
[... the error pair continues repeating at ~12 ms intervals through 05:08:01.783 while the wait loop polls the paused subsystem; near-identical repeats omitted ...]
00:31:10.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (841350) - No such process
00:31:10.856 05:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 841350
00:31:10.856 05:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:10.856 05:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:10.856 05:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:10.856 05:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:10.856 05:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:31:10.856 05:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:10.856 05:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:10.857 delay0
00:31:10.857 05:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:10.857 05:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:10.857 05:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.857 05:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:10.857 05:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.857 05:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:10.857 [2024-12-10 05:08:01.972309] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:17.419 Initializing NVMe Controllers 00:31:17.419 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:17.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:17.419 Initialization complete. Launching workers. 
00:31:17.420 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 237, failed: 22066 00:31:17.420 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22184, failed to submit 119 00:31:17.420 success 22117, unsuccessful 67, failed 0 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:17.420 rmmod nvme_tcp 00:31:17.420 rmmod nvme_fabrics 00:31:17.420 rmmod nvme_keyring 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 839560 ']' 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 839560 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 839560 ']' 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 839560 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 839560 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 839560' 00:31:17.420 killing process with pid 839560 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 839560 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 839560 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:17.420 
05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.420 05:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:19.968 00:31:19.968 real 0m31.673s 00:31:19.968 user 0m40.919s 00:31:19.968 sys 0m12.567s 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:19.968 ************************************ 00:31:19.968 END TEST nvmf_zcopy 00:31:19.968 ************************************ 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:19.968 
************************************ 00:31:19.968 START TEST nvmf_nmic 00:31:19.968 ************************************ 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:19.968 * Looking for test storage... 00:31:19.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:19.968 05:08:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:19.968 05:08:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:19.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.968 --rc genhtml_branch_coverage=1 00:31:19.968 --rc genhtml_function_coverage=1 00:31:19.968 --rc genhtml_legend=1 00:31:19.968 --rc geninfo_all_blocks=1 00:31:19.968 --rc geninfo_unexecuted_blocks=1 00:31:19.968 00:31:19.968 ' 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:19.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.968 --rc genhtml_branch_coverage=1 00:31:19.968 --rc genhtml_function_coverage=1 00:31:19.968 --rc genhtml_legend=1 00:31:19.968 --rc geninfo_all_blocks=1 00:31:19.968 --rc geninfo_unexecuted_blocks=1 00:31:19.968 00:31:19.968 ' 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:19.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.968 --rc genhtml_branch_coverage=1 00:31:19.968 --rc genhtml_function_coverage=1 00:31:19.968 --rc genhtml_legend=1 00:31:19.968 --rc geninfo_all_blocks=1 00:31:19.968 --rc geninfo_unexecuted_blocks=1 00:31:19.968 00:31:19.968 ' 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:19.968 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.968 --rc genhtml_branch_coverage=1 00:31:19.968 --rc genhtml_function_coverage=1 00:31:19.968 --rc genhtml_legend=1 00:31:19.968 --rc geninfo_all_blocks=1 00:31:19.968 --rc geninfo_unexecuted_blocks=1 00:31:19.968 00:31:19.968 ' 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:19.968 05:08:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.968 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.969 05:08:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:19.969 05:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.550 05:08:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:26.551 05:08:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:26.551 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:26.551 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.551 05:08:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:26.551 Found net devices under 0000:af:00.0: cvl_0_0 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.551 05:08:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:26.551 Found net devices under 0000:af:00.1: cvl_0_1 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:26.551 05:08:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:26.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:26.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:31:26.551 00:31:26.551 --- 10.0.0.2 ping statistics --- 00:31:26.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.551 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:31:26.551 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:26.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:26.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:31:26.552 00:31:26.552 --- 10.0.0.1 ping statistics --- 00:31:26.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.552 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=846604 
00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 846604 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 846604 ']' 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:26.552 05:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.552 [2024-12-10 05:08:16.796225] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:26.552 [2024-12-10 05:08:16.797125] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:31:26.552 [2024-12-10 05:08:16.797159] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.552 [2024-12-10 05:08:16.873621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:26.552 [2024-12-10 05:08:16.915057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:26.552 [2024-12-10 05:08:16.915092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:26.552 [2024-12-10 05:08:16.915099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:26.552 [2024-12-10 05:08:16.915106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:26.552 [2024-12-10 05:08:16.915112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:26.552 [2024-12-10 05:08:16.916548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.552 [2024-12-10 05:08:16.916657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:26.552 [2024-12-10 05:08:16.916764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.552 [2024-12-10 05:08:16.916765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:26.552 [2024-12-10 05:08:16.984338] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:26.552 [2024-12-10 05:08:16.985047] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:26.552 [2024-12-10 05:08:16.985240] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:26.552 [2024-12-10 05:08:16.985398] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:26.552 [2024-12-10 05:08:16.985462] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.552 [2024-12-10 05:08:17.053509] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.552 Malloc0 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.552 [2024-12-10 05:08:17.141671] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.552 05:08:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:26.552 test case1: single bdev can't be used in multiple subsystems 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.552 [2024-12-10 05:08:17.173123] 
bdev.c:8511:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:26.552 [2024-12-10 05:08:17.173143] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:26.552 [2024-12-10 05:08:17.173150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.552 request: 00:31:26.552 { 00:31:26.552 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:26.552 "namespace": { 00:31:26.552 "bdev_name": "Malloc0", 00:31:26.552 "no_auto_visible": false, 00:31:26.552 "hide_metadata": false 00:31:26.552 }, 00:31:26.552 "method": "nvmf_subsystem_add_ns", 00:31:26.552 "req_id": 1 00:31:26.552 } 00:31:26.552 Got JSON-RPC error response 00:31:26.552 response: 00:31:26.552 { 00:31:26.552 "code": -32602, 00:31:26.552 "message": "Invalid parameters" 00:31:26.552 } 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:26.552 Adding namespace failed - expected result. 
00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:26.552 test case2: host connect to nvmf target in multiple paths 00:31:26.552 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:26.553 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.553 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.553 [2024-12-10 05:08:17.185213] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:26.553 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.553 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:26.553 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:26.812 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:26.812 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:26.812 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:26.812 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:26.812 05:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:28.717 05:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:28.717 05:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:28.717 05:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:28.717 05:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:28.717 05:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:28.717 05:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:28.717 05:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:28.717 [global] 00:31:28.717 thread=1 00:31:28.717 invalidate=1 00:31:28.717 rw=write 00:31:28.717 time_based=1 00:31:28.717 runtime=1 00:31:28.717 ioengine=libaio 00:31:28.717 direct=1 00:31:28.717 bs=4096 00:31:28.717 iodepth=1 00:31:28.717 norandommap=0 00:31:28.717 numjobs=1 00:31:28.717 00:31:28.717 verify_dump=1 00:31:28.717 verify_backlog=512 00:31:28.717 verify_state_save=0 00:31:28.717 do_verify=1 00:31:28.717 verify=crc32c-intel 00:31:28.717 [job0] 00:31:28.717 filename=/dev/nvme0n1 00:31:28.717 Could not set queue depth (nvme0n1) 00:31:28.976 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:28.976 fio-3.35 00:31:28.976 Starting 1 thread 00:31:30.353 00:31:30.353 job0: (groupid=0, jobs=1): err= 0: pid=847263: Tue Dec 10 
05:08:21 2024 00:31:30.353 read: IOPS=2178, BW=8716KiB/s (8925kB/s)(8916KiB/1023msec) 00:31:30.353 slat (nsec): min=6268, max=28199, avg=7222.93, stdev=1165.59 00:31:30.353 clat (usec): min=180, max=41192, avg=255.54, stdev=1498.29 00:31:30.353 lat (usec): min=187, max=41202, avg=262.77, stdev=1498.72 00:31:30.353 clat percentiles (usec): 00:31:30.353 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 190], 00:31:30.353 | 30.00th=[ 192], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 194], 00:31:30.353 | 70.00th=[ 196], 80.00th=[ 198], 90.00th=[ 200], 95.00th=[ 208], 00:31:30.353 | 99.00th=[ 392], 99.50th=[ 396], 99.90th=[41157], 99.95th=[41157], 00:31:30.353 | 99.99th=[41157] 00:31:30.353 write: IOPS=2502, BW=9.77MiB/s (10.2MB/s)(10.0MiB/1023msec); 0 zone resets 00:31:30.353 slat (nsec): min=9140, max=42584, avg=10127.93, stdev=1209.64 00:31:30.353 clat (usec): min=126, max=360, avg=156.52, stdev=43.51 00:31:30.353 lat (usec): min=136, max=402, avg=166.65, stdev=43.61 00:31:30.353 clat percentiles (usec): 00:31:30.353 | 1.00th=[ 129], 5.00th=[ 130], 10.00th=[ 131], 20.00th=[ 133], 00:31:30.353 | 30.00th=[ 135], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 137], 00:31:30.353 | 70.00th=[ 139], 80.00th=[ 186], 90.00th=[ 243], 95.00th=[ 245], 00:31:30.353 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 260], 99.95th=[ 262], 00:31:30.353 | 99.99th=[ 359] 00:31:30.353 bw ( KiB/s): min= 8360, max=12120, per=100.00%, avg=10240.00, stdev=2658.72, samples=2 00:31:30.353 iops : min= 2090, max= 3030, avg=2560.00, stdev=664.68, samples=2 00:31:30.353 lat (usec) : 250=97.70%, 500=2.17%, 750=0.04% 00:31:30.353 lat (msec) : 2=0.02%, 50=0.06% 00:31:30.353 cpu : usr=2.64%, sys=3.82%, ctx=4789, majf=0, minf=1 00:31:30.353 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.353 issued 
rwts: total=2229,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.353 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:30.353 00:31:30.353 Run status group 0 (all jobs): 00:31:30.353 READ: bw=8716KiB/s (8925kB/s), 8716KiB/s-8716KiB/s (8925kB/s-8925kB/s), io=8916KiB (9130kB), run=1023-1023msec 00:31:30.353 WRITE: bw=9.77MiB/s (10.2MB/s), 9.77MiB/s-9.77MiB/s (10.2MB/s-10.2MB/s), io=10.0MiB (10.5MB), run=1023-1023msec 00:31:30.353 00:31:30.353 Disk stats (read/write): 00:31:30.353 nvme0n1: ios=2273/2560, merge=0/0, ticks=463/381, in_queue=844, util=91.18% 00:31:30.353 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:30.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:30.353 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:30.353 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:30.353 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:30.353 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:30.353 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:30.353 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:30.353 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:30.353 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:30.353 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:30.353 05:08:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:30.353 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:30.353 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:30.353 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:30.353 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:30.353 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:30.353 rmmod nvme_tcp 00:31:30.353 rmmod nvme_fabrics 00:31:30.612 rmmod nvme_keyring 00:31:30.612 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:30.612 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:30.612 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:30.612 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 846604 ']' 00:31:30.612 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 846604 00:31:30.612 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 846604 ']' 00:31:30.612 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 846604 00:31:30.613 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:30.613 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:30.613 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 846604 00:31:30.613 
05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:30.613 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:30.613 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 846604' 00:31:30.613 killing process with pid 846604 00:31:30.613 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 846604 00:31:30.613 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 846604 00:31:30.872 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:30.872 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:30.872 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:30.872 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:30.872 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:30.872 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:30.872 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:30.872 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:30.872 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:30.872 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.872 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:30.872 05:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.778 05:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:32.778 00:31:32.778 real 0m13.211s 00:31:32.778 user 0m24.294s 00:31:32.778 sys 0m6.180s 00:31:32.778 05:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:32.778 05:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:32.778 ************************************ 00:31:32.778 END TEST nvmf_nmic 00:31:32.778 ************************************ 00:31:32.778 05:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:32.779 05:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:32.779 05:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:32.779 05:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:32.779 ************************************ 00:31:32.779 START TEST nvmf_fio_target 00:31:32.779 ************************************ 00:31:32.779 05:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:33.038 * Looking for test storage... 
00:31:33.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:33.039 05:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:33.039 05:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:31:33.039 05:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:33.039 
05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:33.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.039 --rc genhtml_branch_coverage=1 00:31:33.039 --rc genhtml_function_coverage=1 00:31:33.039 --rc genhtml_legend=1 00:31:33.039 --rc geninfo_all_blocks=1 00:31:33.039 --rc geninfo_unexecuted_blocks=1 00:31:33.039 00:31:33.039 ' 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:33.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.039 --rc genhtml_branch_coverage=1 00:31:33.039 --rc genhtml_function_coverage=1 00:31:33.039 --rc genhtml_legend=1 00:31:33.039 --rc geninfo_all_blocks=1 00:31:33.039 --rc geninfo_unexecuted_blocks=1 00:31:33.039 00:31:33.039 ' 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:33.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.039 --rc genhtml_branch_coverage=1 00:31:33.039 --rc genhtml_function_coverage=1 00:31:33.039 --rc genhtml_legend=1 00:31:33.039 --rc geninfo_all_blocks=1 00:31:33.039 --rc geninfo_unexecuted_blocks=1 00:31:33.039 00:31:33.039 ' 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:33.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.039 --rc genhtml_branch_coverage=1 00:31:33.039 --rc genhtml_function_coverage=1 00:31:33.039 --rc genhtml_legend=1 00:31:33.039 --rc geninfo_all_blocks=1 
00:31:33.039 --rc geninfo_unexecuted_blocks=1 00:31:33.039 00:31:33.039 ' 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:33.039 
05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.039 05:08:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:33.039 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:33.040 
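The trace earlier in this section walks through scripts/common.sh's `cmp_versions` helper field by field (splitting "1.15" and "2" on `IFS=.-`, then comparing numeric components) to decide whether the installed lcov predates 2.x. A minimal re-sketch of that comparison in plain bash — the function name and structure here are illustrative, not the exact SPDK implementation:

```shell
#!/usr/bin/env bash
# Sketch of a dotted-version "less than" test in the style of the
# cmp_versions trace above. Hypothetical helper, not SPDK's own code.

# Return 0 (true) when $1 < $2, comparing dot/dash-separated numeric fields.
version_lt() {
    local IFS=.- v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # the case decided in the trace
```

In the log this decides that lcov 1.15 < 2, so the legacy `--rc lcov_branch_coverage=1` option spelling is exported in `LCOV_OPTS`.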
05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:33.040 05:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:33.040 05:08:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:39.612 05:08:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:39.612 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:39.612 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:39.612 
05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:39.612 Found net 
devices under 0000:af:00.0: cvl_0_0 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:39.612 Found net devices under 0000:af:00.1: cvl_0_1 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:39.612 05:08:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:39.612 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:39.613 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:39.613 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:39.613 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:39.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:39.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:31:39.613 00:31:39.613 --- 10.0.0.2 ping statistics --- 00:31:39.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.613 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:31:39.613 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:39.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:39.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:31:39.613 00:31:39.613 --- 10.0.0.1 ping statistics --- 00:31:39.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.613 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:31:39.613 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:39.613 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:39.613 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:39.613 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:39.613 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:39.613 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:39.613 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:39.613 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:39.613 05:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:39.613 05:08:30 
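The `nvmf_tcp_init` trace above builds a point-to-point test topology out of the two ice ports: `cvl_0_0` is moved into a fresh network namespace to host the NVMe-oF target at 10.0.0.2, while its peer `cvl_0_1` stays in the root namespace as the initiator at 10.0.0.1, with both directions verified by ping. Condensed as a sketch (requires root; the `cvl_0_*` names match this run and will differ on other hosts):

```
# Sketch of the namespace setup performed by nvmf/common.sh on this host.
ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
```

Isolating the target in a namespace is what lets a single host exercise a real TCP path between "two" endpoints; the `nvmf_tgt` process is subsequently launched under `ip netns exec cvl_0_0_ns_spdk`, as seen in the `nvmfappstart` trace below.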
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=850925 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 850925 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 850925 ']' 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:39.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:39.613 [2024-12-10 05:08:30.065859] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:39.613 [2024-12-10 05:08:30.066857] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:31:39.613 [2024-12-10 05:08:30.066897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:39.613 [2024-12-10 05:08:30.149891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:39.613 [2024-12-10 05:08:30.193015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:39.613 [2024-12-10 05:08:30.193050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:39.613 [2024-12-10 05:08:30.193057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:39.613 [2024-12-10 05:08:30.193063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:39.613 [2024-12-10 05:08:30.193068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:39.613 [2024-12-10 05:08:30.194387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.613 [2024-12-10 05:08:30.194422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:39.613 [2024-12-10 05:08:30.194452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.613 [2024-12-10 05:08:30.194454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:39.613 [2024-12-10 05:08:30.264481] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:39.613 [2024-12-10 05:08:30.264599] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:39.613 [2024-12-10 05:08:30.265217] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:39.613 [2024-12-10 05:08:30.265368] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:39.613 [2024-12-10 05:08:30.265433] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:39.613 [2024-12-10 05:08:30.507280] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:39.613 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:39.871 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:39.871 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:31:39.871 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:39.871 05:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:40.130 05:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:40.130 05:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:40.388 05:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:40.388 05:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:40.649 05:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:41.014 05:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:41.014 05:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:41.014 05:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:41.014 05:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:41.316 05:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:31:41.316 05:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:41.316 05:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:41.574 05:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:41.574 05:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:41.832 05:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:41.832 05:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:42.090 05:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:42.090 [2024-12-10 05:08:33.175144] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.090 05:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:42.348 05:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:42.606 05:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:42.864 05:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:42.864 05:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:42.864 05:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:42.864 05:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:42.864 05:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:42.864 05:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:44.768 05:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:45.026 05:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:45.026 05:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:45.027 05:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:45.027 05:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:45.027 05:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:31:45.027 05:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:45.027 [global] 00:31:45.027 thread=1 00:31:45.027 invalidate=1 00:31:45.027 rw=write 00:31:45.027 time_based=1 00:31:45.027 runtime=1 00:31:45.027 ioengine=libaio 00:31:45.027 direct=1 00:31:45.027 bs=4096 00:31:45.027 iodepth=1 00:31:45.027 norandommap=0 00:31:45.027 numjobs=1 00:31:45.027 00:31:45.027 verify_dump=1 00:31:45.027 verify_backlog=512 00:31:45.027 verify_state_save=0 00:31:45.027 do_verify=1 00:31:45.027 verify=crc32c-intel 00:31:45.027 [job0] 00:31:45.027 filename=/dev/nvme0n1 00:31:45.027 [job1] 00:31:45.027 filename=/dev/nvme0n2 00:31:45.027 [job2] 00:31:45.027 filename=/dev/nvme0n3 00:31:45.027 [job3] 00:31:45.027 filename=/dev/nvme0n4 00:31:45.027 Could not set queue depth (nvme0n1) 00:31:45.027 Could not set queue depth (nvme0n2) 00:31:45.027 Could not set queue depth (nvme0n3) 00:31:45.027 Could not set queue depth (nvme0n4) 00:31:45.285 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:45.285 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:45.285 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:45.285 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:45.285 fio-3.35 00:31:45.285 Starting 4 threads 00:31:46.661 00:31:46.661 job0: (groupid=0, jobs=1): err= 0: pid=852210: Tue Dec 10 05:08:37 2024 00:31:46.661 read: IOPS=22, BW=88.7KiB/s (90.8kB/s)(92.0KiB/1037msec) 00:31:46.661 slat (nsec): min=9379, max=23024, avg=20341.13, stdev=2775.23 00:31:46.661 clat (usec): min=40678, max=41077, avg=40953.86, stdev=78.56 00:31:46.661 lat (usec): min=40688, 
max=41097, avg=40974.20, stdev=80.15 00:31:46.661 clat percentiles (usec): 00:31:46.661 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:46.661 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:46.661 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:46.661 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:46.661 | 99.99th=[41157] 00:31:46.661 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:31:46.661 slat (nsec): min=9707, max=39450, avg=10913.15, stdev=1816.58 00:31:46.661 clat (usec): min=122, max=305, avg=171.25, stdev=32.76 00:31:46.661 lat (usec): min=133, max=317, avg=182.16, stdev=32.95 00:31:46.661 clat percentiles (usec): 00:31:46.661 | 1.00th=[ 126], 5.00th=[ 131], 10.00th=[ 137], 20.00th=[ 143], 00:31:46.661 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 167], 00:31:46.661 | 70.00th=[ 190], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 229], 00:31:46.661 | 99.00th=[ 245], 99.50th=[ 281], 99.90th=[ 306], 99.95th=[ 306], 00:31:46.661 | 99.99th=[ 306] 00:31:46.661 bw ( KiB/s): min= 4096, max= 4096, per=23.04%, avg=4096.00, stdev= 0.00, samples=1 00:31:46.661 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:46.661 lat (usec) : 250=94.77%, 500=0.93% 00:31:46.661 lat (msec) : 50=4.30% 00:31:46.661 cpu : usr=0.87%, sys=0.29%, ctx=535, majf=0, minf=2 00:31:46.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.661 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.661 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.661 job1: (groupid=0, jobs=1): err= 0: pid=852211: Tue Dec 10 05:08:37 2024 00:31:46.661 read: IOPS=360, BW=1442KiB/s (1476kB/s)(1456KiB/1010msec) 
00:31:46.661 slat (nsec): min=6702, max=46886, avg=10786.78, stdev=5794.93 00:31:46.661 clat (usec): min=195, max=41974, avg=2495.53, stdev=9311.12 00:31:46.661 lat (usec): min=216, max=41982, avg=2506.32, stdev=9310.90 00:31:46.661 clat percentiles (usec): 00:31:46.661 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 231], 00:31:46.661 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 249], 00:31:46.661 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 310], 95.00th=[40633], 00:31:46.661 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:31:46.661 | 99.99th=[42206] 00:31:46.661 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:31:46.661 slat (nsec): min=8945, max=37694, avg=10561.52, stdev=1942.58 00:31:46.661 clat (usec): min=123, max=321, avg=174.44, stdev=34.76 00:31:46.661 lat (usec): min=132, max=358, avg=185.00, stdev=35.21 00:31:46.661 clat percentiles (usec): 00:31:46.661 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 147], 00:31:46.661 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 169], 00:31:46.661 | 70.00th=[ 198], 80.00th=[ 212], 90.00th=[ 225], 95.00th=[ 235], 00:31:46.661 | 99.00th=[ 269], 99.50th=[ 314], 99.90th=[ 322], 99.95th=[ 322], 00:31:46.661 | 99.99th=[ 322] 00:31:46.661 bw ( KiB/s): min= 4096, max= 4096, per=23.04%, avg=4096.00, stdev= 0.00, samples=1 00:31:46.661 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:46.661 lat (usec) : 250=84.47%, 500=12.67%, 750=0.46%, 1000=0.11% 00:31:46.661 lat (msec) : 50=2.28% 00:31:46.661 cpu : usr=0.59%, sys=0.99%, ctx=876, majf=0, minf=1 00:31:46.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.661 issued rwts: total=364,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.661 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:31:46.661 job2: (groupid=0, jobs=1): err= 0: pid=852212: Tue Dec 10 05:08:37 2024 00:31:46.661 read: IOPS=862, BW=3449KiB/s (3532kB/s)(3480KiB/1009msec) 00:31:46.661 slat (nsec): min=6789, max=26139, avg=7745.93, stdev=2004.48 00:31:46.661 clat (usec): min=211, max=41087, avg=849.95, stdev=4935.41 00:31:46.661 lat (usec): min=219, max=41095, avg=857.70, stdev=4936.90 00:31:46.661 clat percentiles (usec): 00:31:46.661 | 1.00th=[ 219], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 229], 00:31:46.661 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:31:46.661 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 289], 00:31:46.661 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:46.661 | 99.99th=[41157] 00:31:46.661 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:31:46.661 slat (usec): min=9, max=40715, avg=68.82, stdev=1397.20 00:31:46.661 clat (usec): min=136, max=393, avg=183.22, stdev=35.89 00:31:46.661 lat (usec): min=147, max=41109, avg=252.05, stdev=1405.31 00:31:46.661 clat percentiles (usec): 00:31:46.661 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:31:46.661 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 172], 00:31:46.661 | 70.00th=[ 204], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 243], 00:31:46.661 | 99.00th=[ 251], 99.50th=[ 258], 99.90th=[ 318], 99.95th=[ 396], 00:31:46.661 | 99.99th=[ 396] 00:31:46.661 bw ( KiB/s): min= 8192, max= 8192, per=46.09%, avg=8192.00, stdev= 0.00, samples=1 00:31:46.661 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:46.661 lat (usec) : 250=89.33%, 500=9.98% 00:31:46.661 lat (msec) : 50=0.69% 00:31:46.661 cpu : usr=0.89%, sys=1.88%, ctx=1897, majf=0, minf=1 00:31:46.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.661 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.661 issued rwts: total=870,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.661 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.661 job3: (groupid=0, jobs=1): err= 0: pid=852213: Tue Dec 10 05:08:37 2024 00:31:46.661 read: IOPS=2232, BW=8931KiB/s (9145kB/s)(8940KiB/1001msec) 00:31:46.661 slat (nsec): min=6808, max=27475, avg=7649.37, stdev=835.78 00:31:46.661 clat (usec): min=195, max=367, avg=225.77, stdev=17.74 00:31:46.661 lat (usec): min=202, max=374, avg=233.42, stdev=17.76 00:31:46.661 clat percentiles (usec): 00:31:46.661 | 1.00th=[ 202], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 210], 00:31:46.661 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 231], 00:31:46.661 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 255], 00:31:46.661 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 297], 99.95th=[ 314], 00:31:46.661 | 99.99th=[ 367] 00:31:46.661 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:46.661 slat (usec): min=9, max=18533, avg=18.19, stdev=366.09 00:31:46.661 clat (usec): min=135, max=814, avg=164.47, stdev=28.07 00:31:46.661 lat (usec): min=148, max=18893, avg=182.65, stdev=370.99 00:31:46.661 clat percentiles (usec): 00:31:46.661 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 145], 20.00th=[ 149], 00:31:46.661 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 167], 00:31:46.661 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 194], 00:31:46.661 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 494], 99.95th=[ 529], 00:31:46.661 | 99.99th=[ 816] 00:31:46.661 bw ( KiB/s): min= 9864, max= 9864, per=55.50%, avg=9864.00, stdev= 0.00, samples=1 00:31:46.661 iops : min= 2466, max= 2466, avg=2466.00, stdev= 0.00, samples=1 00:31:46.661 lat (usec) : 250=94.72%, 500=5.23%, 750=0.02%, 1000=0.02% 00:31:46.661 cpu : usr=1.90%, sys=5.20%, ctx=4797, majf=0, minf=1 00:31:46.662 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.662 issued rwts: total=2235,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.662 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.662 00:31:46.662 Run status group 0 (all jobs): 00:31:46.662 READ: bw=13.2MiB/s (13.8MB/s), 88.7KiB/s-8931KiB/s (90.8kB/s-9145kB/s), io=13.6MiB (14.3MB), run=1001-1037msec 00:31:46.662 WRITE: bw=17.4MiB/s (18.2MB/s), 1975KiB/s-9.99MiB/s (2022kB/s-10.5MB/s), io=18.0MiB (18.9MB), run=1001-1037msec 00:31:46.662 00:31:46.662 Disk stats (read/write): 00:31:46.662 nvme0n1: ios=67/512, merge=0/0, ticks=720/84, in_queue=804, util=82.06% 00:31:46.662 nvme0n2: ios=407/512, merge=0/0, ticks=752/86, in_queue=838, util=85.91% 00:31:46.662 nvme0n3: ios=923/1024, merge=0/0, ticks=1381/177, in_queue=1558, util=95.55% 00:31:46.662 nvme0n4: ios=1797/2048, merge=0/0, ticks=1245/336, in_queue=1581, util=100.00% 00:31:46.662 05:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:46.662 [global] 00:31:46.662 thread=1 00:31:46.662 invalidate=1 00:31:46.662 rw=randwrite 00:31:46.662 time_based=1 00:31:46.662 runtime=1 00:31:46.662 ioengine=libaio 00:31:46.662 direct=1 00:31:46.662 bs=4096 00:31:46.662 iodepth=1 00:31:46.662 norandommap=0 00:31:46.662 numjobs=1 00:31:46.662 00:31:46.662 verify_dump=1 00:31:46.662 verify_backlog=512 00:31:46.662 verify_state_save=0 00:31:46.662 do_verify=1 00:31:46.662 verify=crc32c-intel 00:31:46.662 [job0] 00:31:46.662 filename=/dev/nvme0n1 00:31:46.662 [job1] 00:31:46.662 filename=/dev/nvme0n2 00:31:46.662 [job2] 00:31:46.662 filename=/dev/nvme0n3 00:31:46.662 [job3] 00:31:46.662 filename=/dev/nvme0n4 00:31:46.662 Could not set queue depth 
(nvme0n1) 00:31:46.662 Could not set queue depth (nvme0n2) 00:31:46.662 Could not set queue depth (nvme0n3) 00:31:46.662 Could not set queue depth (nvme0n4) 00:31:46.919 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:46.919 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:46.919 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:46.919 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:46.919 fio-3.35 00:31:46.919 Starting 4 threads 00:31:48.296 00:31:48.296 job0: (groupid=0, jobs=1): err= 0: pid=852581: Tue Dec 10 05:08:39 2024 00:31:48.296 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:31:48.296 slat (nsec): min=6477, max=14926, avg=7249.58, stdev=620.92 00:31:48.296 clat (usec): min=163, max=506, avg=267.43, stdev=45.82 00:31:48.296 lat (usec): min=171, max=513, avg=274.68, stdev=45.79 00:31:48.296 clat percentiles (usec): 00:31:48.296 | 1.00th=[ 186], 5.00th=[ 215], 10.00th=[ 229], 20.00th=[ 237], 00:31:48.296 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 269], 00:31:48.296 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 322], 95.00th=[ 351], 00:31:48.296 | 99.00th=[ 445], 99.50th=[ 494], 99.90th=[ 506], 99.95th=[ 506], 00:31:48.296 | 99.99th=[ 506] 00:31:48.296 write: IOPS=2207, BW=8831KiB/s (9043kB/s)(8840KiB/1001msec); 0 zone resets 00:31:48.296 slat (nsec): min=9330, max=40350, avg=10191.10, stdev=1056.02 00:31:48.296 clat (usec): min=129, max=1254, avg=183.68, stdev=41.13 00:31:48.296 lat (usec): min=140, max=1263, avg=193.87, stdev=41.27 00:31:48.296 clat percentiles (usec): 00:31:48.296 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:31:48.296 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 178], 00:31:48.296 | 70.00th=[ 192], 80.00th=[ 212], 
90.00th=[ 241], 95.00th=[ 255], 00:31:48.296 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 367], 99.95th=[ 375], 00:31:48.296 | 99.99th=[ 1254] 00:31:48.296 bw ( KiB/s): min= 8192, max= 8192, per=29.24%, avg=8192.00, stdev= 0.00, samples=1 00:31:48.296 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:48.296 lat (usec) : 250=69.14%, 500=30.77%, 750=0.07% 00:31:48.296 lat (msec) : 2=0.02% 00:31:48.296 cpu : usr=2.10%, sys=3.80%, ctx=4259, majf=0, minf=1 00:31:48.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.297 issued rwts: total=2048,2210,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:48.297 job1: (groupid=0, jobs=1): err= 0: pid=852582: Tue Dec 10 05:08:39 2024 00:31:48.297 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:31:48.297 slat (nsec): min=7137, max=26523, avg=8707.48, stdev=1783.60 00:31:48.297 clat (usec): min=171, max=650, avg=258.27, stdev=46.21 00:31:48.297 lat (usec): min=179, max=658, avg=266.98, stdev=46.06 00:31:48.297 clat percentiles (usec): 00:31:48.297 | 1.00th=[ 204], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 235], 00:31:48.297 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:31:48.297 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 310], 95.00th=[ 355], 00:31:48.297 | 99.00th=[ 490], 99.50th=[ 502], 99.90th=[ 529], 99.95th=[ 562], 00:31:48.297 | 99.99th=[ 652] 00:31:48.297 write: IOPS=2308, BW=9235KiB/s (9456kB/s)(9244KiB/1001msec); 0 zone resets 00:31:48.297 slat (nsec): min=10176, max=41215, avg=11836.63, stdev=1946.04 00:31:48.297 clat (usec): min=135, max=475, avg=178.46, stdev=32.55 00:31:48.297 lat (usec): min=146, max=487, avg=190.30, stdev=32.61 00:31:48.297 clat percentiles (usec): 00:31:48.297 | 1.00th=[ 145], 
5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:31:48.297 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:31:48.297 | 70.00th=[ 176], 80.00th=[ 198], 90.00th=[ 235], 95.00th=[ 251], 00:31:48.297 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 338], 99.95th=[ 343], 00:31:48.297 | 99.99th=[ 474] 00:31:48.297 bw ( KiB/s): min= 8192, max= 8192, per=29.24%, avg=8192.00, stdev= 0.00, samples=1 00:31:48.297 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:48.297 lat (usec) : 250=80.64%, 500=19.06%, 750=0.30% 00:31:48.297 cpu : usr=4.20%, sys=6.70%, ctx=4360, majf=0, minf=1 00:31:48.297 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.297 issued rwts: total=2048,2311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:48.297 job2: (groupid=0, jobs=1): err= 0: pid=852583: Tue Dec 10 05:08:39 2024 00:31:48.297 read: IOPS=108, BW=435KiB/s (446kB/s)(440KiB/1011msec) 00:31:48.297 slat (nsec): min=6877, max=23841, avg=8351.27, stdev=1985.86 00:31:48.297 clat (usec): min=222, max=41143, avg=8377.61, stdev=16349.44 00:31:48.297 lat (usec): min=229, max=41154, avg=8385.96, stdev=16350.43 00:31:48.297 clat percentiles (usec): 00:31:48.297 | 1.00th=[ 223], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 233], 00:31:48.297 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 247], 00:31:48.297 | 70.00th=[ 251], 80.00th=[ 281], 90.00th=[41157], 95.00th=[41157], 00:31:48.297 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:48.297 | 99.99th=[41157] 00:31:48.297 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:31:48.297 slat (nsec): min=6981, max=30148, avg=8562.53, stdev=2377.63 00:31:48.297 clat (usec): min=143, max=289, avg=161.24, 
stdev=11.98 00:31:48.297 lat (usec): min=150, max=319, avg=169.80, stdev=13.50 00:31:48.297 clat percentiles (usec): 00:31:48.297 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 153], 00:31:48.297 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:31:48.297 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 174], 95.00th=[ 178], 00:31:48.297 | 99.00th=[ 190], 99.50th=[ 200], 99.90th=[ 289], 99.95th=[ 289], 00:31:48.297 | 99.99th=[ 289] 00:31:48.297 bw ( KiB/s): min= 4096, max= 4096, per=14.62%, avg=4096.00, stdev= 0.00, samples=1 00:31:48.297 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:48.297 lat (usec) : 250=93.73%, 500=2.73% 00:31:48.297 lat (msec) : 50=3.54% 00:31:48.297 cpu : usr=0.10%, sys=0.69%, ctx=625, majf=0, minf=1 00:31:48.297 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.297 issued rwts: total=110,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:48.297 job3: (groupid=0, jobs=1): err= 0: pid=852584: Tue Dec 10 05:08:39 2024 00:31:48.297 read: IOPS=1583, BW=6334KiB/s (6486kB/s)(6340KiB/1001msec) 00:31:48.297 slat (nsec): min=7414, max=60336, avg=10215.97, stdev=2371.14 00:31:48.297 clat (usec): min=180, max=605, avg=322.90, stdev=66.74 00:31:48.297 lat (usec): min=189, max=615, avg=333.12, stdev=66.66 00:31:48.297 clat percentiles (usec): 00:31:48.297 | 1.00th=[ 233], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 255], 00:31:48.297 | 30.00th=[ 285], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 330], 00:31:48.297 | 70.00th=[ 338], 80.00th=[ 355], 90.00th=[ 429], 95.00th=[ 465], 00:31:48.297 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 545], 99.95th=[ 603], 00:31:48.297 | 99.99th=[ 603] 00:31:48.297 write: IOPS=2045, BW=8184KiB/s 
(8380kB/s)(8192KiB/1001msec); 0 zone resets 00:31:48.297 slat (nsec): min=9773, max=67514, avg=13246.14, stdev=2515.19 00:31:48.297 clat (usec): min=139, max=1085, avg=211.43, stdev=41.82 00:31:48.297 lat (usec): min=149, max=1103, avg=224.68, stdev=42.31 00:31:48.297 clat percentiles (usec): 00:31:48.297 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 176], 00:31:48.297 | 30.00th=[ 184], 40.00th=[ 196], 50.00th=[ 212], 60.00th=[ 225], 00:31:48.297 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 269], 00:31:48.297 | 99.00th=[ 314], 99.50th=[ 334], 99.90th=[ 355], 99.95th=[ 379], 00:31:48.297 | 99.99th=[ 1090] 00:31:48.297 bw ( KiB/s): min= 8192, max= 8192, per=29.24%, avg=8192.00, stdev= 0.00, samples=1 00:31:48.297 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:48.297 lat (usec) : 250=57.14%, 500=42.03%, 750=0.80% 00:31:48.297 lat (msec) : 2=0.03% 00:31:48.297 cpu : usr=4.20%, sys=5.90%, ctx=3634, majf=0, minf=2 00:31:48.297 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.297 issued rwts: total=1585,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:48.297 00:31:48.297 Run status group 0 (all jobs): 00:31:48.297 READ: bw=22.4MiB/s (23.5MB/s), 435KiB/s-8184KiB/s (446kB/s-8380kB/s), io=22.6MiB (23.7MB), run=1001-1011msec 00:31:48.297 WRITE: bw=27.4MiB/s (28.7MB/s), 2026KiB/s-9235KiB/s (2074kB/s-9456kB/s), io=27.7MiB (29.0MB), run=1001-1011msec 00:31:48.297 00:31:48.297 Disk stats (read/write): 00:31:48.297 nvme0n1: ios=1566/1881, merge=0/0, ticks=1316/343, in_queue=1659, util=89.58% 00:31:48.297 nvme0n2: ios=1560/1995, merge=0/0, ticks=1364/352, in_queue=1716, util=100.00% 00:31:48.297 nvme0n3: ios=163/512, merge=0/0, ticks=957/81, in_queue=1038, 
util=96.86% 00:31:48.297 nvme0n4: ios=1364/1536, merge=0/0, ticks=428/306, in_queue=734, util=89.76% 00:31:48.297 05:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:48.297 [global] 00:31:48.297 thread=1 00:31:48.297 invalidate=1 00:31:48.297 rw=write 00:31:48.297 time_based=1 00:31:48.297 runtime=1 00:31:48.297 ioengine=libaio 00:31:48.297 direct=1 00:31:48.297 bs=4096 00:31:48.297 iodepth=128 00:31:48.297 norandommap=0 00:31:48.297 numjobs=1 00:31:48.297 00:31:48.297 verify_dump=1 00:31:48.297 verify_backlog=512 00:31:48.297 verify_state_save=0 00:31:48.297 do_verify=1 00:31:48.297 verify=crc32c-intel 00:31:48.297 [job0] 00:31:48.297 filename=/dev/nvme0n1 00:31:48.297 [job1] 00:31:48.297 filename=/dev/nvme0n2 00:31:48.297 [job2] 00:31:48.297 filename=/dev/nvme0n3 00:31:48.297 [job3] 00:31:48.297 filename=/dev/nvme0n4 00:31:48.297 Could not set queue depth (nvme0n1) 00:31:48.297 Could not set queue depth (nvme0n2) 00:31:48.297 Could not set queue depth (nvme0n3) 00:31:48.297 Could not set queue depth (nvme0n4) 00:31:48.561 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:48.561 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:48.561 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:48.561 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:48.561 fio-3.35 00:31:48.561 Starting 4 threads 00:31:49.952 00:31:49.952 job0: (groupid=0, jobs=1): err= 0: pid=852951: Tue Dec 10 05:08:40 2024 00:31:49.952 read: IOPS=4307, BW=16.8MiB/s (17.6MB/s)(17.0MiB/1008msec) 00:31:49.952 slat (nsec): min=1091, max=18249k, avg=100937.80, stdev=785809.18 00:31:49.952 clat (usec): min=2478, 
max=81273, avg=12705.68, stdev=9291.05 00:31:49.952 lat (usec): min=2484, max=81278, avg=12806.62, stdev=9379.45 00:31:49.952 clat percentiles (usec): 00:31:49.952 | 1.00th=[ 3032], 5.00th=[ 5604], 10.00th=[ 7242], 20.00th=[ 8291], 00:31:49.952 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10683], 60.00th=[11469], 00:31:49.952 | 70.00th=[13173], 80.00th=[15008], 90.00th=[17695], 95.00th=[20579], 00:31:49.952 | 99.00th=[67634], 99.50th=[73925], 99.90th=[81265], 99.95th=[81265], 00:31:49.952 | 99.99th=[81265] 00:31:49.952 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:31:49.952 slat (usec): min=2, max=14738, avg=102.49, stdev=612.54 00:31:49.952 clat (usec): min=318, max=81275, avg=15770.80, stdev=13650.46 00:31:49.952 lat (usec): min=369, max=81284, avg=15873.29, stdev=13742.73 00:31:49.952 clat percentiles (usec): 00:31:49.952 | 1.00th=[ 3589], 5.00th=[ 5080], 10.00th=[ 6063], 20.00th=[ 8094], 00:31:49.952 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10552], 00:31:49.952 | 70.00th=[14746], 80.00th=[20579], 90.00th=[45351], 95.00th=[50594], 00:31:49.952 | 99.00th=[54789], 99.50th=[61604], 99.90th=[67634], 99.95th=[67634], 00:31:49.952 | 99.99th=[81265] 00:31:49.952 bw ( KiB/s): min=14160, max=22704, per=27.17%, avg=18432.00, stdev=6041.52, samples=2 00:31:49.952 iops : min= 3540, max= 5676, avg=4608.00, stdev=1510.38, samples=2 00:31:49.952 lat (usec) : 500=0.06% 00:31:49.952 lat (msec) : 2=0.19%, 4=1.50%, 10=47.72%, 20=36.42%, 50=9.63% 00:31:49.952 lat (msec) : 100=4.48% 00:31:49.952 cpu : usr=4.07%, sys=4.67%, ctx=402, majf=0, minf=2 00:31:49.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:49.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:49.952 issued rwts: total=4342,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.952 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:31:49.952 job1: (groupid=0, jobs=1): err= 0: pid=852953: Tue Dec 10 05:08:40 2024 00:31:49.952 read: IOPS=3762, BW=14.7MiB/s (15.4MB/s)(15.4MiB/1046msec) 00:31:49.952 slat (nsec): min=1326, max=18916k, avg=112089.63, stdev=952756.92 00:31:49.952 clat (usec): min=5091, max=57707, avg=15472.48, stdev=9723.44 00:31:49.952 lat (usec): min=5100, max=57714, avg=15584.57, stdev=9785.31 00:31:49.952 clat percentiles (usec): 00:31:49.952 | 1.00th=[ 6521], 5.00th=[ 7046], 10.00th=[ 8455], 20.00th=[ 9241], 00:31:49.952 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[12518], 60.00th=[13960], 00:31:49.952 | 70.00th=[16319], 80.00th=[19268], 90.00th=[30016], 95.00th=[32113], 00:31:49.952 | 99.00th=[50594], 99.50th=[54789], 99.90th=[57410], 99.95th=[57934], 00:31:49.952 | 99.99th=[57934] 00:31:49.952 write: IOPS=3915, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1046msec); 0 zone resets 00:31:49.952 slat (usec): min=2, max=37255, avg=129.24, stdev=1041.13 00:31:49.952 clat (usec): min=250, max=76880, avg=16366.78, stdev=14410.87 00:31:49.952 lat (usec): min=514, max=76890, avg=16496.02, stdev=14526.30 00:31:49.952 clat percentiles (usec): 00:31:49.952 | 1.00th=[ 734], 5.00th=[ 5932], 10.00th=[ 6849], 20.00th=[ 8586], 00:31:49.952 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[12911], 00:31:49.952 | 70.00th=[15401], 80.00th=[17957], 90.00th=[49546], 95.00th=[51643], 00:31:49.952 | 99.00th=[65799], 99.50th=[69731], 99.90th=[76022], 99.95th=[76022], 00:31:49.952 | 99.99th=[77071] 00:31:49.952 bw ( KiB/s): min=10800, max=21968, per=24.15%, avg=16384.00, stdev=7896.97, samples=2 00:31:49.952 iops : min= 2700, max= 5492, avg=4096.00, stdev=1974.24, samples=2 00:31:49.952 lat (usec) : 500=0.02%, 750=0.96%, 1000=0.20% 00:31:49.952 lat (msec) : 4=0.06%, 10=39.93%, 20=42.04%, 50=10.73%, 100=6.05% 00:31:49.952 cpu : usr=3.73%, sys=4.69%, ctx=319, majf=0, minf=1 00:31:49.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:49.952 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:49.952 issued rwts: total=3936,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.952 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:49.952 job2: (groupid=0, jobs=1): err= 0: pid=852954: Tue Dec 10 05:08:40 2024 00:31:49.953 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:31:49.953 slat (nsec): min=1363, max=20243k, avg=118282.53, stdev=910975.74 00:31:49.953 clat (usec): min=8287, max=59220, avg=16689.07, stdev=10038.31 00:31:49.953 lat (usec): min=8293, max=59444, avg=16807.35, stdev=10111.09 00:31:49.953 clat percentiles (usec): 00:31:49.953 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11207], 00:31:49.953 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12387], 60.00th=[12780], 00:31:49.953 | 70.00th=[13829], 80.00th=[23462], 90.00th=[31589], 95.00th=[43779], 00:31:49.953 | 99.00th=[49021], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:31:49.953 | 99.99th=[58983] 00:31:49.953 write: IOPS=4176, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1003msec); 0 zone resets 00:31:49.953 slat (usec): min=2, max=20579, avg=114.53, stdev=805.12 00:31:49.953 clat (usec): min=404, max=52787, avg=14002.72, stdev=7731.89 00:31:49.953 lat (usec): min=3776, max=57601, avg=14117.25, stdev=7813.28 00:31:49.953 clat percentiles (usec): 00:31:49.953 | 1.00th=[ 4424], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[10814], 00:31:49.953 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11731], 60.00th=[11994], 00:31:49.953 | 70.00th=[12649], 80.00th=[13173], 90.00th=[22152], 95.00th=[35390], 00:31:49.953 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50070], 99.95th=[52691], 00:31:49.953 | 99.99th=[52691] 00:31:49.953 bw ( KiB/s): min=16352, max=16440, per=24.17%, avg=16396.00, stdev=62.23, samples=2 00:31:49.953 iops : min= 4088, max= 4110, avg=4099.00, stdev=15.56, samples=2 00:31:49.953 lat (usec) : 
500=0.01% 00:31:49.953 lat (msec) : 4=0.14%, 10=8.98%, 20=75.09%, 50=15.06%, 100=0.71% 00:31:49.953 cpu : usr=3.69%, sys=5.59%, ctx=305, majf=0, minf=1 00:31:49.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:49.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:49.953 issued rwts: total=4096,4189,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.953 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:49.953 job3: (groupid=0, jobs=1): err= 0: pid=852955: Tue Dec 10 05:08:40 2024 00:31:49.953 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:31:49.953 slat (nsec): min=1378, max=23944k, avg=97941.54, stdev=737036.06 00:31:49.953 clat (usec): min=2539, max=44970, avg=13062.70, stdev=6513.64 00:31:49.953 lat (usec): min=2547, max=44981, avg=13160.65, stdev=6565.34 00:31:49.953 clat percentiles (usec): 00:31:49.953 | 1.00th=[ 5604], 5.00th=[ 8160], 10.00th=[ 8586], 20.00th=[ 9110], 00:31:49.953 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10421], 60.00th=[11338], 00:31:49.953 | 70.00th=[13435], 80.00th=[15270], 90.00th=[21627], 95.00th=[30016], 00:31:49.953 | 99.00th=[33424], 99.50th=[41681], 99.90th=[42206], 99.95th=[44827], 00:31:49.953 | 99.99th=[44827] 00:31:49.953 write: IOPS=4810, BW=18.8MiB/s (19.7MB/s)(18.9MiB/1008msec); 0 zone resets 00:31:49.953 slat (usec): min=2, max=18777, avg=103.41, stdev=754.10 00:31:49.953 clat (usec): min=3859, max=55121, avg=13944.49, stdev=9207.26 00:31:49.953 lat (usec): min=3994, max=55133, avg=14047.90, stdev=9259.95 00:31:49.953 clat percentiles (usec): 00:31:49.953 | 1.00th=[ 4424], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 8717], 00:31:49.953 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10814], 00:31:49.953 | 70.00th=[11731], 80.00th=[19006], 90.00th=[28443], 95.00th=[32375], 00:31:49.953 | 99.00th=[53740], 99.50th=[54789], 99.90th=[55313], 
99.95th=[55313], 00:31:49.953 | 99.99th=[55313] 00:31:49.953 bw ( KiB/s): min=17288, max=20480, per=27.83%, avg=18884.00, stdev=2257.08, samples=2 00:31:49.953 iops : min= 4322, max= 5120, avg=4721.00, stdev=564.27, samples=2 00:31:49.953 lat (msec) : 4=0.26%, 10=45.11%, 20=38.36%, 50=15.51%, 100=0.75% 00:31:49.953 cpu : usr=2.58%, sys=6.06%, ctx=365, majf=0, minf=1 00:31:49.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:49.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:49.953 issued rwts: total=4608,4849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.953 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:49.953 00:31:49.953 Run status group 0 (all jobs): 00:31:49.953 READ: bw=63.4MiB/s (66.5MB/s), 14.7MiB/s-17.9MiB/s (15.4MB/s-18.7MB/s), io=66.3MiB (69.6MB), run=1003-1046msec 00:31:49.953 WRITE: bw=66.3MiB/s (69.5MB/s), 15.3MiB/s-18.8MiB/s (16.0MB/s-19.7MB/s), io=69.3MiB (72.7MB), run=1003-1046msec 00:31:49.953 00:31:49.953 Disk stats (read/write): 00:31:49.953 nvme0n1: ios=3628/3767, merge=0/0, ticks=43935/59489, in_queue=103424, util=90.58% 00:31:49.953 nvme0n2: ios=3026/3072, merge=0/0, ticks=45605/56329, in_queue=101934, util=95.74% 00:31:49.953 nvme0n3: ios=3599/3584, merge=0/0, ticks=26619/20800, in_queue=47419, util=99.07% 00:31:49.953 nvme0n4: ios=4116/4455, merge=0/0, ticks=27442/34332, in_queue=61774, util=99.90% 00:31:49.953 05:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:49.953 [global] 00:31:49.953 thread=1 00:31:49.953 invalidate=1 00:31:49.953 rw=randwrite 00:31:49.953 time_based=1 00:31:49.953 runtime=1 00:31:49.953 ioengine=libaio 00:31:49.953 direct=1 00:31:49.953 bs=4096 00:31:49.953 iodepth=128 00:31:49.953 
norandommap=0 00:31:49.953 numjobs=1 00:31:49.953 00:31:49.953 verify_dump=1 00:31:49.953 verify_backlog=512 00:31:49.953 verify_state_save=0 00:31:49.953 do_verify=1 00:31:49.953 verify=crc32c-intel 00:31:49.953 [job0] 00:31:49.953 filename=/dev/nvme0n1 00:31:49.953 [job1] 00:31:49.953 filename=/dev/nvme0n2 00:31:49.953 [job2] 00:31:49.953 filename=/dev/nvme0n3 00:31:49.953 [job3] 00:31:49.953 filename=/dev/nvme0n4 00:31:49.953 Could not set queue depth (nvme0n1) 00:31:49.953 Could not set queue depth (nvme0n2) 00:31:49.953 Could not set queue depth (nvme0n3) 00:31:49.953 Could not set queue depth (nvme0n4) 00:31:50.210 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:50.211 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:50.211 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:50.211 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:50.211 fio-3.35 00:31:50.211 Starting 4 threads 00:31:51.579 00:31:51.579 job0: (groupid=0, jobs=1): err= 0: pid=853314: Tue Dec 10 05:08:42 2024 00:31:51.579 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:31:51.579 slat (usec): min=2, max=11708, avg=145.84, stdev=891.70 00:31:51.579 clat (usec): min=9981, max=46301, avg=19442.36, stdev=5693.19 00:31:51.579 lat (usec): min=9988, max=46308, avg=19588.20, stdev=5766.42 00:31:51.579 clat percentiles (usec): 00:31:51.579 | 1.00th=[11994], 5.00th=[12649], 10.00th=[13435], 20.00th=[14091], 00:31:51.579 | 30.00th=[15270], 40.00th=[17171], 50.00th=[19006], 60.00th=[20055], 00:31:51.579 | 70.00th=[20841], 80.00th=[22676], 90.00th=[28705], 95.00th=[29754], 00:31:51.579 | 99.00th=[36963], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:31:51.579 | 99.99th=[46400] 00:31:51.579 write: IOPS=3275, BW=12.8MiB/s 
(13.4MB/s)(12.9MiB/1007msec); 0 zone resets 00:31:51.579 slat (usec): min=3, max=17932, avg=161.94, stdev=976.42 00:31:51.579 clat (usec): min=779, max=41931, avg=20609.00, stdev=9073.38 00:31:51.579 lat (usec): min=5496, max=41973, avg=20770.94, stdev=9164.98 00:31:51.579 clat percentiles (usec): 00:31:51.579 | 1.00th=[ 7439], 5.00th=[ 8717], 10.00th=[10028], 20.00th=[11076], 00:31:51.579 | 30.00th=[13566], 40.00th=[15664], 50.00th=[20055], 60.00th=[20579], 00:31:51.579 | 70.00th=[27132], 80.00th=[29230], 90.00th=[32900], 95.00th=[38011], 00:31:51.579 | 99.00th=[39060], 99.50th=[39060], 99.90th=[40633], 99.95th=[40633], 00:31:51.579 | 99.99th=[41681] 00:31:51.579 bw ( KiB/s): min=12224, max=13109, per=17.75%, avg=12666.50, stdev=625.79, samples=2 00:31:51.579 iops : min= 3056, max= 3277, avg=3166.50, stdev=156.27, samples=2 00:31:51.579 lat (usec) : 1000=0.02% 00:31:51.579 lat (msec) : 10=5.38%, 20=49.26%, 50=45.34% 00:31:51.579 cpu : usr=2.49%, sys=5.07%, ctx=230, majf=0, minf=1 00:31:51.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:31:51.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:51.579 issued rwts: total=3072,3298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:51.579 job1: (groupid=0, jobs=1): err= 0: pid=853315: Tue Dec 10 05:08:42 2024 00:31:51.579 read: IOPS=6992, BW=27.3MiB/s (28.6MB/s)(27.5MiB/1007msec) 00:31:51.579 slat (nsec): min=1284, max=9246.8k, avg=69875.63, stdev=588422.89 00:31:51.579 clat (usec): min=1437, max=22467, avg=9277.11, stdev=2494.85 00:31:51.579 lat (usec): min=4949, max=22478, avg=9346.99, stdev=2540.46 00:31:51.579 clat percentiles (usec): 00:31:51.579 | 1.00th=[ 5538], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 7111], 00:31:51.579 | 30.00th=[ 7701], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9503], 
00:31:51.579 | 70.00th=[ 9765], 80.00th=[11076], 90.00th=[12780], 95.00th=[14222], 00:31:51.579 | 99.00th=[16712], 99.50th=[17433], 99.90th=[18482], 99.95th=[18482], 00:31:51.579 | 99.99th=[22414] 00:31:51.579 write: IOPS=7118, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1007msec); 0 zone resets 00:31:51.579 slat (nsec): min=1811, max=16127k, avg=64757.87, stdev=557205.46 00:31:51.579 clat (usec): min=946, max=33571, avg=8706.31, stdev=2930.12 00:31:51.580 lat (usec): min=972, max=33591, avg=8771.07, stdev=2964.52 00:31:51.580 clat percentiles (usec): 00:31:51.580 | 1.00th=[ 3458], 5.00th=[ 4752], 10.00th=[ 5276], 20.00th=[ 6521], 00:31:51.580 | 30.00th=[ 7308], 40.00th=[ 7635], 50.00th=[ 8291], 60.00th=[ 9241], 00:31:51.580 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[12780], 95.00th=[14222], 00:31:51.580 | 99.00th=[17695], 99.50th=[18220], 99.90th=[18220], 99.95th=[18744], 00:31:51.580 | 99.99th=[33817] 00:31:51.580 bw ( KiB/s): min=25600, max=31680, per=40.13%, avg=28640.00, stdev=4299.21, samples=2 00:31:51.580 iops : min= 6400, max= 7920, avg=7160.00, stdev=1074.80, samples=2 00:31:51.580 lat (usec) : 1000=0.02% 00:31:51.580 lat (msec) : 2=0.11%, 4=0.54%, 10=73.86%, 20=25.43%, 50=0.03% 00:31:51.580 cpu : usr=6.06%, sys=7.85%, ctx=348, majf=0, minf=2 00:31:51.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:31:51.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:51.580 issued rwts: total=7041,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:51.580 job2: (groupid=0, jobs=1): err= 0: pid=853316: Tue Dec 10 05:08:42 2024 00:31:51.580 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:31:51.580 slat (nsec): min=1903, max=21060k, avg=189986.84, stdev=1103623.54 00:31:51.580 clat (usec): min=13350, max=74595, avg=23207.71, stdev=9305.50 00:31:51.580 lat 
(usec): min=13355, max=74620, avg=23397.69, stdev=9405.96 00:31:51.580 clat percentiles (usec): 00:31:51.580 | 1.00th=[13960], 5.00th=[14615], 10.00th=[15795], 20.00th=[16712], 00:31:51.580 | 30.00th=[18744], 40.00th=[19530], 50.00th=[20841], 60.00th=[21365], 00:31:51.580 | 70.00th=[23462], 80.00th=[26608], 90.00th=[35914], 95.00th=[43254], 00:31:51.580 | 99.00th=[63177], 99.50th=[63177], 99.90th=[72877], 99.95th=[72877], 00:31:51.580 | 99.99th=[74974] 00:31:51.580 write: IOPS=2788, BW=10.9MiB/s (11.4MB/s)(11.0MiB/1007msec); 0 zone resets 00:31:51.580 slat (usec): min=2, max=15174, avg=171.87, stdev=1069.29 00:31:51.580 clat (usec): min=1116, max=75463, avg=24325.71, stdev=10999.54 00:31:51.580 lat (usec): min=1155, max=75486, avg=24497.58, stdev=11101.37 00:31:51.580 clat percentiles (usec): 00:31:51.580 | 1.00th=[10290], 5.00th=[12780], 10.00th=[14353], 20.00th=[14877], 00:31:51.580 | 30.00th=[17957], 40.00th=[19530], 50.00th=[20579], 60.00th=[23725], 00:31:51.580 | 70.00th=[27657], 80.00th=[31327], 90.00th=[43254], 95.00th=[44827], 00:31:51.580 | 99.00th=[57410], 99.50th=[63177], 99.90th=[68682], 99.95th=[72877], 00:31:51.580 | 99.99th=[74974] 00:31:51.580 bw ( KiB/s): min= 9061, max=12360, per=15.00%, avg=10710.50, stdev=2332.75, samples=2 00:31:51.580 iops : min= 2265, max= 3090, avg=2677.50, stdev=583.36, samples=2 00:31:51.580 lat (msec) : 2=0.02%, 4=0.02%, 10=0.15%, 20=42.51%, 50=54.36% 00:31:51.580 lat (msec) : 100=2.94% 00:31:51.580 cpu : usr=2.98%, sys=3.58%, ctx=227, majf=0, minf=2 00:31:51.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:51.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:51.580 issued rwts: total=2560,2808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:51.580 job3: (groupid=0, jobs=1): err= 0: pid=853317: Tue Dec 10 
05:08:42 2024 00:31:51.580 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:31:51.580 slat (nsec): min=1234, max=14432k, avg=101055.68, stdev=797986.76 00:31:51.580 clat (usec): min=5800, max=68828, avg=14688.04, stdev=8110.70 00:31:51.580 lat (usec): min=5807, max=68834, avg=14789.10, stdev=8164.07 00:31:51.580 clat percentiles (usec): 00:31:51.580 | 1.00th=[ 6390], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[10290], 00:31:51.580 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11863], 60.00th=[13042], 00:31:51.580 | 70.00th=[14484], 80.00th=[17433], 90.00th=[22152], 95.00th=[31589], 00:31:51.580 | 99.00th=[55313], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:31:51.580 | 99.99th=[68682] 00:31:51.580 write: IOPS=4662, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1007msec); 0 zone resets 00:31:51.580 slat (nsec): min=1885, max=20623k, avg=102946.35, stdev=838619.09 00:31:51.580 clat (usec): min=1128, max=55118, avg=12765.52, stdev=5899.63 00:31:51.580 lat (usec): min=1138, max=55126, avg=12868.47, stdev=5994.30 00:31:51.580 clat percentiles (usec): 00:31:51.580 | 1.00th=[ 3949], 5.00th=[ 6718], 10.00th=[ 7439], 20.00th=[ 8717], 00:31:51.580 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[11469], 00:31:51.580 | 70.00th=[12518], 80.00th=[16188], 90.00th=[19268], 95.00th=[22938], 00:31:51.580 | 99.00th=[36963], 99.50th=[36963], 99.90th=[46924], 99.95th=[55313], 00:31:51.580 | 99.99th=[55313] 00:31:51.580 bw ( KiB/s): min=16207, max=20624, per=25.80%, avg=18415.50, stdev=3123.29, samples=2 00:31:51.580 iops : min= 4051, max= 5156, avg=4603.50, stdev=781.35, samples=2 00:31:51.580 lat (msec) : 2=0.20%, 4=0.34%, 10=21.89%, 20=65.92%, 50=10.58% 00:31:51.580 lat (msec) : 100=1.06% 00:31:51.580 cpu : usr=3.18%, sys=5.77%, ctx=337, majf=0, minf=1 00:31:51.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:51.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.580 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:51.580 issued rwts: total=4608,4695,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:51.580 00:31:51.580 Run status group 0 (all jobs): 00:31:51.580 READ: bw=67.0MiB/s (70.3MB/s), 9.93MiB/s-27.3MiB/s (10.4MB/s-28.6MB/s), io=67.5MiB (70.8MB), run=1007-1007msec 00:31:51.580 WRITE: bw=69.7MiB/s (73.1MB/s), 10.9MiB/s-27.8MiB/s (11.4MB/s-29.2MB/s), io=70.2MiB (73.6MB), run=1007-1007msec 00:31:51.580 00:31:51.580 Disk stats (read/write): 00:31:51.580 nvme0n1: ios=2597/2843, merge=0/0, ticks=26517/25761, in_queue=52278, util=96.29% 00:31:51.580 nvme0n2: ios=6113/6144, merge=0/0, ticks=52957/50708, in_queue=103665, util=87.91% 00:31:51.580 nvme0n3: ios=2105/2405, merge=0/0, ticks=17272/20791, in_queue=38063, util=95.22% 00:31:51.580 nvme0n4: ios=3612/4022, merge=0/0, ticks=42859/41622, in_queue=84481, util=98.22% 00:31:51.580 05:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:51.580 05:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=853542 00:31:51.580 05:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:51.580 05:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:51.580 [global] 00:31:51.580 thread=1 00:31:51.580 invalidate=1 00:31:51.580 rw=read 00:31:51.580 time_based=1 00:31:51.580 runtime=10 00:31:51.580 ioengine=libaio 00:31:51.580 direct=1 00:31:51.580 bs=4096 00:31:51.580 iodepth=1 00:31:51.580 norandommap=1 00:31:51.580 numjobs=1 00:31:51.580 00:31:51.580 [job0] 00:31:51.580 filename=/dev/nvme0n1 00:31:51.580 [job1] 00:31:51.580 filename=/dev/nvme0n2 00:31:51.580 [job2] 00:31:51.580 filename=/dev/nvme0n3 00:31:51.580 [job3] 00:31:51.580 filename=/dev/nvme0n4 
00:31:51.580 Could not set queue depth (nvme0n1) 00:31:51.580 Could not set queue depth (nvme0n2) 00:31:51.580 Could not set queue depth (nvme0n3) 00:31:51.580 Could not set queue depth (nvme0n4) 00:31:51.580 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:51.580 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:51.580 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:51.580 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:51.580 fio-3.35 00:31:51.580 Starting 4 threads 00:31:54.853 05:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:54.853 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=40951808, buflen=4096 00:31:54.853 fio: pid=853687, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:54.853 05:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:54.853 05:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:54.853 05:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:54.853 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=843776, buflen=4096 00:31:54.853 fio: pid=853686, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:55.109 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=49385472, 
buflen=4096 00:31:55.109 fio: pid=853684, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:55.109 05:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:55.109 05:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:55.109 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=53018624, buflen=4096 00:31:55.109 fio: pid=853685, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:55.109 05:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:55.109 05:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:55.366 00:31:55.366 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=853684: Tue Dec 10 05:08:46 2024 00:31:55.366 read: IOPS=3824, BW=14.9MiB/s (15.7MB/s)(47.1MiB/3153msec) 00:31:55.366 slat (usec): min=6, max=9661, avg=10.27, stdev=151.10 00:31:55.366 clat (usec): min=165, max=609, avg=248.22, stdev=37.77 00:31:55.366 lat (usec): min=173, max=10008, avg=258.49, stdev=157.78 00:31:55.366 clat percentiles (usec): 00:31:55.366 | 1.00th=[ 176], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 219], 00:31:55.366 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 251], 00:31:55.366 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 306], 95.00th=[ 322], 00:31:55.366 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 400], 99.95th=[ 416], 00:31:55.366 | 99.99th=[ 465] 00:31:55.366 bw ( KiB/s): min=15048, max=15973, per=36.84%, avg=15375.50, stdev=362.14, samples=6 
00:31:55.366 iops : min= 3762, max= 3993, avg=3843.83, stdev=90.45, samples=6 00:31:55.366 lat (usec) : 250=59.34%, 500=40.65%, 750=0.01% 00:31:55.366 cpu : usr=0.92%, sys=3.68%, ctx=12064, majf=0, minf=1 00:31:55.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:55.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.366 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.366 issued rwts: total=12058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:55.366 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=853685: Tue Dec 10 05:08:46 2024 00:31:55.366 read: IOPS=3836, BW=15.0MiB/s (15.7MB/s)(50.6MiB/3374msec) 00:31:55.366 slat (usec): min=6, max=26404, avg=13.21, stdev=306.53 00:31:55.366 clat (usec): min=164, max=544, avg=243.64, stdev=31.99 00:31:55.366 lat (usec): min=172, max=26830, avg=256.85, stdev=310.07 00:31:55.366 clat percentiles (usec): 00:31:55.366 | 1.00th=[ 178], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 217], 00:31:55.366 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:31:55.366 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 285], 95.00th=[ 310], 00:31:55.366 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 416], 99.95th=[ 494], 00:31:55.366 | 99.99th=[ 506] 00:31:55.366 bw ( KiB/s): min=14952, max=16208, per=37.00%, avg=15442.67, stdev=535.24, samples=6 00:31:55.366 iops : min= 3738, max= 4052, avg=3860.67, stdev=133.81, samples=6 00:31:55.366 lat (usec) : 250=65.32%, 500=34.64%, 750=0.03% 00:31:55.366 cpu : usr=2.61%, sys=5.87%, ctx=12950, majf=0, minf=2 00:31:55.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:55.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.366 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.366 
issued rwts: total=12945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:55.366 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=853686: Tue Dec 10 05:08:46 2024 00:31:55.366 read: IOPS=70, BW=280KiB/s (287kB/s)(824KiB/2945msec) 00:31:55.366 slat (nsec): min=7235, max=33876, avg=13800.02, stdev=7039.59 00:31:55.366 clat (usec): min=230, max=41768, avg=14176.31, stdev=19245.32 00:31:55.366 lat (usec): min=240, max=41777, avg=14190.06, stdev=19246.60 00:31:55.366 clat percentiles (usec): 00:31:55.366 | 1.00th=[ 233], 5.00th=[ 265], 10.00th=[ 285], 20.00th=[ 343], 00:31:55.366 | 30.00th=[ 392], 40.00th=[ 433], 50.00th=[ 494], 60.00th=[ 519], 00:31:55.366 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:55.366 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:55.366 | 99.99th=[41681] 00:31:55.366 bw ( KiB/s): min= 168, max= 512, per=0.72%, avg=300.80, stdev=141.15, samples=5 00:31:55.366 iops : min= 42, max= 128, avg=75.20, stdev=35.29, samples=5 00:31:55.366 lat (usec) : 250=3.86%, 500=48.79%, 750=12.56% 00:31:55.366 lat (msec) : 2=0.48%, 50=33.82% 00:31:55.366 cpu : usr=0.24%, sys=0.00%, ctx=207, majf=0, minf=2 00:31:55.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:55.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.366 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.366 issued rwts: total=207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:55.366 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=853687: Tue Dec 10 05:08:46 2024 00:31:55.366 read: IOPS=3685, BW=14.4MiB/s (15.1MB/s)(39.1MiB/2713msec) 00:31:55.366 slat (nsec): min=8091, max=55016, avg=9460.08, 
stdev=1438.20 00:31:55.366 clat (usec): min=187, max=443, avg=257.72, stdev=37.16 00:31:55.366 lat (usec): min=196, max=475, avg=267.18, stdev=37.20 00:31:55.366 clat percentiles (usec): 00:31:55.366 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 225], 00:31:55.366 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 258], 00:31:55.366 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 334], 00:31:55.366 | 99.00th=[ 363], 99.50th=[ 367], 99.90th=[ 383], 99.95th=[ 424], 00:31:55.366 | 99.99th=[ 445] 00:31:55.366 bw ( KiB/s): min=14240, max=15536, per=35.53%, avg=14828.80, stdev=600.84, samples=5 00:31:55.366 iops : min= 3560, max= 3884, avg=3707.20, stdev=150.21, samples=5 00:31:55.366 lat (usec) : 250=46.87%, 500=53.12% 00:31:55.366 cpu : usr=2.18%, sys=6.64%, ctx=10002, majf=0, minf=1 00:31:55.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:55.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.366 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.366 issued rwts: total=9999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:55.366 00:31:55.366 Run status group 0 (all jobs): 00:31:55.366 READ: bw=40.8MiB/s (42.7MB/s), 280KiB/s-15.0MiB/s (287kB/s-15.7MB/s), io=138MiB (144MB), run=2713-3374msec 00:31:55.366 00:31:55.366 Disk stats (read/write): 00:31:55.366 nvme0n1: ios=11955/0, merge=0/0, ticks=3142/0, in_queue=3142, util=98.55% 00:31:55.366 nvme0n2: ios=12970/0, merge=0/0, ticks=3527/0, in_queue=3527, util=98.22% 00:31:55.366 nvme0n3: ios=204/0, merge=0/0, ticks=2841/0, in_queue=2841, util=96.49% 00:31:55.366 nvme0n4: ios=9699/0, merge=0/0, ticks=2909/0, in_queue=2909, util=99.63% 00:31:55.366 05:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:55.366 05:08:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:55.623 05:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:55.623 05:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:55.879 05:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:55.879 05:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:56.136 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:56.136 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:56.136 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:56.136 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 853542 00:31:56.136 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:56.136 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:56.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:56.392 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:31:56.392 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:56.392 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:56.392 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:56.392 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:56.392 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:56.392 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:56.392 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:56.392 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:56.392 nvmf hotplug test: fio failed as expected 00:31:56.392 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:56.650 05:08:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:56.650 rmmod nvme_tcp 00:31:56.650 rmmod nvme_fabrics 00:31:56.650 rmmod nvme_keyring 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 850925 ']' 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 850925 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 850925 ']' 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 850925 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 850925 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 850925' 00:31:56.650 killing process with pid 850925 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 850925 00:31:56.650 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 850925 00:31:56.909 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:56.909 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:56.909 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:56.909 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:56.909 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:56.909 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:56.909 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:56.909 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:56.909 05:08:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:56.909 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.909 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:56.909 05:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.443 05:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:59.443 00:31:59.443 real 0m26.051s 00:31:59.443 user 1m32.541s 00:31:59.443 sys 0m11.625s 00:31:59.443 05:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:59.443 05:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.443 ************************************ 00:31:59.443 END TEST nvmf_fio_target 00:31:59.443 ************************************ 00:31:59.443 05:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:59.443 05:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:59.443 05:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:59.443 05:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:59.443 ************************************ 00:31:59.443 START TEST nvmf_bdevio 00:31:59.443 ************************************ 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:59.443 * Looking for test storage... 00:31:59.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:59.443 05:08:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # 
(( ver1[v] < ver2[v] )) 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:59.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.443 --rc genhtml_branch_coverage=1 00:31:59.443 --rc genhtml_function_coverage=1 00:31:59.443 --rc genhtml_legend=1 00:31:59.443 --rc geninfo_all_blocks=1 00:31:59.443 --rc geninfo_unexecuted_blocks=1 00:31:59.443 00:31:59.443 ' 00:31:59.443 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:59.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.443 --rc genhtml_branch_coverage=1 00:31:59.444 --rc genhtml_function_coverage=1 00:31:59.444 --rc genhtml_legend=1 00:31:59.444 --rc geninfo_all_blocks=1 00:31:59.444 --rc geninfo_unexecuted_blocks=1 00:31:59.444 00:31:59.444 ' 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:59.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.444 --rc genhtml_branch_coverage=1 00:31:59.444 --rc genhtml_function_coverage=1 00:31:59.444 --rc genhtml_legend=1 00:31:59.444 --rc geninfo_all_blocks=1 00:31:59.444 --rc geninfo_unexecuted_blocks=1 00:31:59.444 00:31:59.444 ' 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:59.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.444 --rc genhtml_branch_coverage=1 00:31:59.444 --rc genhtml_function_coverage=1 00:31:59.444 --rc genhtml_legend=1 00:31:59.444 --rc 
geninfo_all_blocks=1 00:31:59.444 --rc geninfo_unexecuted_blocks=1 00:31:59.444 00:31:59.444 ' 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:59.444 05:08:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:59.444 05:08:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:59.444 05:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:04.714 05:08:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:04.714 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:04.715 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:04.715 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.715 05:08:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:04.715 Found net devices under 0000:af:00.0: cvl_0_0 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:04.715 Found net devices under 0000:af:00.1: cvl_0_1 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:04.715 05:08:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:04.715 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:04.975 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:04.975 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:04.975 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:32:04.975 05:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:04.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:04.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:32:04.975 00:32:04.975 --- 10.0.0.2 ping statistics --- 00:32:04.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.975 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:04.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:04.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:32:04.975 00:32:04.975 --- 10.0.0.1 ping statistics --- 00:32:04.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.975 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=857984 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 857984 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 857984 ']' 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:04.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:04.975 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:05.234 [2024-12-10 05:08:56.129281] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:05.234 [2024-12-10 05:08:56.130172] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:32:05.234 [2024-12-10 05:08:56.130205] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:05.234 [2024-12-10 05:08:56.208438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:05.234 [2024-12-10 05:08:56.248850] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:05.234 [2024-12-10 05:08:56.248887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:05.234 [2024-12-10 05:08:56.248894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:05.234 [2024-12-10 05:08:56.248900] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:05.234 [2024-12-10 05:08:56.248908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:05.234 [2024-12-10 05:08:56.250385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:05.234 [2024-12-10 05:08:56.250498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:05.234 [2024-12-10 05:08:56.250604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:05.234 [2024-12-10 05:08:56.250605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:05.234 [2024-12-10 05:08:56.317074] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:05.234 [2024-12-10 05:08:56.317645] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:05.234 [2024-12-10 05:08:56.317826] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:05.234 [2024-12-10 05:08:56.318013] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:05.234 [2024-12-10 05:08:56.318086] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:05.234 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:05.234 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:05.234 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:05.234 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:05.234 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:05.493 [2024-12-10 05:08:56.383271] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:05.493 Malloc0 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:05.493 [2024-12-10 05:08:56.467505] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:05.493 { 00:32:05.493 "params": { 00:32:05.493 "name": "Nvme$subsystem", 00:32:05.493 "trtype": "$TEST_TRANSPORT", 00:32:05.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:05.493 "adrfam": "ipv4", 00:32:05.493 "trsvcid": "$NVMF_PORT", 00:32:05.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:05.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:05.493 "hdgst": ${hdgst:-false}, 00:32:05.493 "ddgst": ${ddgst:-false} 00:32:05.493 }, 00:32:05.493 "method": "bdev_nvme_attach_controller" 00:32:05.493 } 00:32:05.493 EOF 00:32:05.493 )") 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:05.493 05:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:05.493 "params": { 00:32:05.493 "name": "Nvme1", 00:32:05.493 "trtype": "tcp", 00:32:05.493 "traddr": "10.0.0.2", 00:32:05.493 "adrfam": "ipv4", 00:32:05.493 "trsvcid": "4420", 00:32:05.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:05.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:05.493 "hdgst": false, 00:32:05.493 "ddgst": false 00:32:05.493 }, 00:32:05.493 "method": "bdev_nvme_attach_controller" 00:32:05.493 }' 00:32:05.493 [2024-12-10 05:08:56.517125] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:32:05.493 [2024-12-10 05:08:56.517176] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858078 ] 00:32:05.493 [2024-12-10 05:08:56.593316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:05.751 [2024-12-10 05:08:56.635726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.751 [2024-12-10 05:08:56.635837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.751 [2024-12-10 05:08:56.635839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:06.008 I/O targets: 00:32:06.008 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:06.008 00:32:06.008 00:32:06.008 CUnit - A unit testing framework for C - Version 2.1-3 00:32:06.008 http://cunit.sourceforge.net/ 00:32:06.008 00:32:06.008 00:32:06.008 Suite: bdevio tests on: Nvme1n1 00:32:06.008 Test: blockdev write read block ...passed 00:32:06.008 Test: blockdev write zeroes read block ...passed 00:32:06.008 Test: blockdev write zeroes read no split ...passed 00:32:06.008 Test: blockdev 
write zeroes read split ...passed 00:32:06.008 Test: blockdev write zeroes read split partial ...passed 00:32:06.008 Test: blockdev reset ...[2024-12-10 05:08:57.104703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:06.008 [2024-12-10 05:08:57.104761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18034f0 (9): Bad file descriptor 00:32:06.265 [2024-12-10 05:08:57.149182] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:32:06.265 passed 00:32:06.265 Test: blockdev write read 8 blocks ...passed 00:32:06.265 Test: blockdev write read size > 128k ...passed 00:32:06.265 Test: blockdev write read invalid size ...passed 00:32:06.265 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:06.265 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:06.265 Test: blockdev write read max offset ...passed 00:32:06.265 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:06.265 Test: blockdev writev readv 8 blocks ...passed 00:32:06.265 Test: blockdev writev readv 30 x 1block ...passed 00:32:06.265 Test: blockdev writev readv block ...passed 00:32:06.265 Test: blockdev writev readv size > 128k ...passed 00:32:06.265 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:06.265 Test: blockdev comparev and writev ...[2024-12-10 05:08:57.359033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:06.265 [2024-12-10 05:08:57.359062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:06.265 [2024-12-10 05:08:57.359076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:06.265 
[2024-12-10 05:08:57.359084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.265 [2024-12-10 05:08:57.359369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:06.265 [2024-12-10 05:08:57.359381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:06.265 [2024-12-10 05:08:57.359392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:06.265 [2024-12-10 05:08:57.359399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:06.265 [2024-12-10 05:08:57.359679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:06.265 [2024-12-10 05:08:57.359690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:06.265 [2024-12-10 05:08:57.359701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:06.265 [2024-12-10 05:08:57.359708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:06.265 [2024-12-10 05:08:57.359974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:06.265 [2024-12-10 05:08:57.359985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:06.265 [2024-12-10 05:08:57.359997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:06.265 [2024-12-10 05:08:57.360004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:06.522 passed 00:32:06.522 Test: blockdev nvme passthru rw ...passed 00:32:06.522 Test: blockdev nvme passthru vendor specific ...[2024-12-10 05:08:57.441532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:06.522 [2024-12-10 05:08:57.441556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:06.522 [2024-12-10 05:08:57.441666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:06.522 [2024-12-10 05:08:57.441676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:06.522 [2024-12-10 05:08:57.441783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:06.522 [2024-12-10 05:08:57.441797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:06.522 [2024-12-10 05:08:57.441902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:06.522 [2024-12-10 05:08:57.441912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:06.522 passed 00:32:06.522 Test: blockdev nvme admin passthru ...passed 00:32:06.522 Test: blockdev copy ...passed 00:32:06.522 00:32:06.522 Run Summary: Type Total Ran Passed Failed Inactive 00:32:06.522 suites 1 1 n/a 0 0 00:32:06.522 tests 23 23 23 0 0 00:32:06.522 asserts 152 152 152 0 n/a 00:32:06.522 00:32:06.522 Elapsed time = 1.092 
seconds 00:32:06.522 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:06.522 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.522 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:06.522 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.522 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:06.522 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:06.522 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:06.523 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:06.523 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:06.523 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:06.523 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:06.523 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:06.523 rmmod nvme_tcp 00:32:06.781 rmmod nvme_fabrics 00:32:06.781 rmmod nvme_keyring 00:32:06.781 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:06.781 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:06.781 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:06.781 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 857984 ']' 00:32:06.781 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 857984 00:32:06.781 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 857984 ']' 00:32:06.781 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 857984 00:32:06.781 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:06.781 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:06.781 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 857984 00:32:06.781 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:06.781 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:06.781 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 857984' 00:32:06.781 killing process with pid 857984 00:32:06.781 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 857984 00:32:06.781 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 857984 00:32:07.040 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:07.040 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:07.040 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:07.040 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:32:07.040 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:07.040 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:07.040 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:07.040 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:07.040 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:07.040 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.040 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:07.040 05:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:08.943 05:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:08.943 00:32:08.943 real 0m9.997s 00:32:08.943 user 0m9.481s 00:32:08.943 sys 0m5.149s 00:32:08.943 05:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:08.943 05:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:08.943 ************************************ 00:32:08.943 END TEST nvmf_bdevio 00:32:08.943 ************************************ 00:32:08.943 05:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:08.943 00:32:08.943 real 4m31.280s 00:32:08.943 user 9m7.500s 00:32:08.943 sys 1m50.812s 00:32:08.943 05:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:32:08.943 05:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:08.943 ************************************ 00:32:08.943 END TEST nvmf_target_core_interrupt_mode 00:32:08.943 ************************************ 00:32:09.202 05:09:00 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:09.202 05:09:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:09.202 05:09:00 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:09.202 05:09:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:09.202 ************************************ 00:32:09.202 START TEST nvmf_interrupt 00:32:09.202 ************************************ 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:09.202 * Looking for test storage... 
00:32:09.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:09.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.202 --rc genhtml_branch_coverage=1 00:32:09.202 --rc genhtml_function_coverage=1 00:32:09.202 --rc genhtml_legend=1 00:32:09.202 --rc geninfo_all_blocks=1 00:32:09.202 --rc geninfo_unexecuted_blocks=1 00:32:09.202 00:32:09.202 ' 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:09.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.202 --rc genhtml_branch_coverage=1 00:32:09.202 --rc 
genhtml_function_coverage=1 00:32:09.202 --rc genhtml_legend=1 00:32:09.202 --rc geninfo_all_blocks=1 00:32:09.202 --rc geninfo_unexecuted_blocks=1 00:32:09.202 00:32:09.202 ' 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:09.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.202 --rc genhtml_branch_coverage=1 00:32:09.202 --rc genhtml_function_coverage=1 00:32:09.202 --rc genhtml_legend=1 00:32:09.202 --rc geninfo_all_blocks=1 00:32:09.202 --rc geninfo_unexecuted_blocks=1 00:32:09.202 00:32:09.202 ' 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:09.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.202 --rc genhtml_branch_coverage=1 00:32:09.202 --rc genhtml_function_coverage=1 00:32:09.202 --rc genhtml_legend=1 00:32:09.202 --rc geninfo_all_blocks=1 00:32:09.202 --rc geninfo_unexecuted_blocks=1 00:32:09.202 00:32:09.202 ' 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.202 
05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:09.202 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.203 
05:09:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:09.203 05:09:00 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:09.203 05:09:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.462 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:09.462 
05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:09.462 05:09:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:09.462 05:09:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:14.734 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:14.734 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:14.734 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:14.734 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:14.734 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:14.734 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:14.734 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:14.734 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:14.734 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:14.734 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:14.734 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:14.735 05:09:05 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:14.735 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:14.735 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:14.735 05:09:05 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:14.735 Found net devices under 0000:af:00.0: cvl_0_0 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:14.735 Found net devices under 0000:af:00.1: cvl_0_1 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:14.735 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:14.994 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:14.994 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:14.994 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:14.994 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:14.994 05:09:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:14.994 05:09:06 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:14.994 05:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:14.994 05:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:14.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:14.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:32:14.995 00:32:14.995 --- 10.0.0.2 ping statistics --- 00:32:14.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.995 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:14.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:14.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:32:14.995 00:32:14.995 --- 10.0.0.1 ping statistics --- 00:32:14.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.995 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:14.995 05:09:06 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=861870 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 861870 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 861870 ']' 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:14.995 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:14.995 [2024-12-10 05:09:06.121714] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:14.995 [2024-12-10 05:09:06.122743] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:32:14.995 [2024-12-10 05:09:06.122783] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:15.254 [2024-12-10 05:09:06.202286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:15.254 [2024-12-10 05:09:06.242695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:15.254 [2024-12-10 05:09:06.242731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:15.254 [2024-12-10 05:09:06.242739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:15.254 [2024-12-10 05:09:06.242745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:15.254 [2024-12-10 05:09:06.242751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:15.254 [2024-12-10 05:09:06.243805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.254 [2024-12-10 05:09:06.243805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.254 [2024-12-10 05:09:06.311378] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:15.254 [2024-12-10 05:09:06.311856] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:15.254 [2024-12-10 05:09:06.312067] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:15.254 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:15.254 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:15.254 05:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:15.254 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:15.254 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:15.254 05:09:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:15.254 05:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:15.254 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:15.254 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:15.254 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:15.513 5000+0 records in 00:32:15.513 5000+0 records out 00:32:15.513 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0179225 s, 571 MB/s 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:15.513 AIO0 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.513 05:09:06 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:15.513 [2024-12-10 05:09:06.440689] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:15.513 [2024-12-10 05:09:06.480937] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 861870 0 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 861870 0 idle 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=861870 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 861870 -w 256 00:32:15.513 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 861870 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.25 reactor_0' 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 861870 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.25 reactor_0 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 861870 1 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 861870 1 idle 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=861870 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 861870 -w 256 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 861953 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 861953 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 
reactor_1 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=862190 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 861870 0 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 861870 0 busy 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=861870 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 861870 -w 256 00:32:15.772 05:09:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 861870 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.43 reactor_0' 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 861870 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.43 reactor_0 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 861870 1 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 861870 1 busy 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=861870 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 861870 -w 256 00:32:16.031 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:16.289 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 861953 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.27 reactor_1' 00:32:16.289 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 861953 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.27 reactor_1 00:32:16.289 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:16.289 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:16.289 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:16.289 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:16.289 05:09:07 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:16.289 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:16.289 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:16.289 05:09:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:16.289 05:09:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 862190 00:32:26.250 Initializing NVMe Controllers 00:32:26.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:26.250 Controller IO queue size 256, less than required. 00:32:26.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:26.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:26.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:26.250 Initialization complete. Launching workers. 
00:32:26.250 ======================================================== 00:32:26.250 Latency(us) 00:32:26.250 Device Information : IOPS MiB/s Average min max 00:32:26.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16557.53 64.68 15469.73 3491.01 30959.36 00:32:26.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16361.73 63.91 15652.94 7382.96 57420.54 00:32:26.250 ======================================================== 00:32:26.250 Total : 32919.26 128.59 15560.79 3491.01 57420.54 00:32:26.250 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 861870 0 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 861870 0 idle 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=861870 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 861870 -w 256 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep 
reactor_0 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 861870 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.24 reactor_0' 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 861870 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.24 reactor_0 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 861870 1 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 861870 1 idle 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=861870 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:26.250 05:09:17 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 861870 -w 256 00:32:26.250 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:26.510 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 861953 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.01 reactor_1' 00:32:26.510 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 861953 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.01 reactor_1 00:32:26.510 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:26.510 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:26.510 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:26.510 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:26.510 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:26.510 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:26.510 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:26.510 05:09:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:26.510 05:09:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:26.769 05:09:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:32:26.769 05:09:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:26.769 05:09:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:26.769 05:09:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:26.769 05:09:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 861870 0 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 861870 0 idle 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=861870 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 861870 -w 256 00:32:29.305 05:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:29.305 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 861870 root 20 0 128.2g 73728 34560 S 6.7 0.1 0:20.51 reactor_0' 00:32:29.305 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 861870 root 20 0 128.2g 73728 34560 S 6.7 0.1 0:20.51 reactor_0 00:32:29.305 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:29.305 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 861870 1 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 861870 1 idle 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=861870 00:32:29.306 
05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 861870 -w 256 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 861953 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.11 reactor_1' 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 861953 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.11 reactor_1 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:29.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:29.306 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:29.565 rmmod nvme_tcp 00:32:29.565 rmmod nvme_fabrics 00:32:29.565 rmmod nvme_keyring 00:32:29.565 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:29.565 05:09:20 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:29.565 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:29.565 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 861870 ']' 00:32:29.565 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 861870 00:32:29.565 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 861870 ']' 00:32:29.565 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 861870 00:32:29.565 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:29.565 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.565 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 861870 00:32:29.565 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:29.565 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:29.565 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 861870' 00:32:29.565 killing process with pid 861870 00:32:29.565 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 861870 00:32:29.565 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 861870 00:32:29.824 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:29.824 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:29.824 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:29.824 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:29.824 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:29.824 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:29.824 05:09:20 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:32:29.824 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:29.824 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:29.824 05:09:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.824 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:29.824 05:09:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.730 05:09:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:31.730 00:32:31.730 real 0m22.734s 00:32:31.730 user 0m39.618s 00:32:31.730 sys 0m8.386s 00:32:31.730 05:09:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:31.730 05:09:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.730 ************************************ 00:32:31.730 END TEST nvmf_interrupt 00:32:31.730 ************************************ 00:32:31.989 00:32:31.989 real 27m23.105s 00:32:31.989 user 56m33.751s 00:32:31.989 sys 9m15.666s 00:32:31.989 05:09:22 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:31.989 05:09:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:31.989 ************************************ 00:32:31.989 END TEST nvmf_tcp 00:32:31.989 ************************************ 00:32:31.989 05:09:22 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:31.989 05:09:22 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:31.989 05:09:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:31.989 05:09:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:31.989 05:09:22 -- common/autotest_common.sh@10 -- # set +x 00:32:31.989 ************************************ 
00:32:31.989 START TEST spdkcli_nvmf_tcp 00:32:31.989 ************************************ 00:32:31.989 05:09:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:31.989 * Looking for test storage... 00:32:31.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:31.989 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:31.989 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:32:31.989 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:32.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.249 --rc genhtml_branch_coverage=1 00:32:32.249 --rc genhtml_function_coverage=1 00:32:32.249 --rc genhtml_legend=1 00:32:32.249 --rc geninfo_all_blocks=1 00:32:32.249 --rc geninfo_unexecuted_blocks=1 00:32:32.249 00:32:32.249 ' 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:32.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.249 --rc genhtml_branch_coverage=1 00:32:32.249 --rc genhtml_function_coverage=1 00:32:32.249 --rc genhtml_legend=1 00:32:32.249 --rc geninfo_all_blocks=1 
00:32:32.249 --rc geninfo_unexecuted_blocks=1 00:32:32.249 00:32:32.249 ' 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:32.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.249 --rc genhtml_branch_coverage=1 00:32:32.249 --rc genhtml_function_coverage=1 00:32:32.249 --rc genhtml_legend=1 00:32:32.249 --rc geninfo_all_blocks=1 00:32:32.249 --rc geninfo_unexecuted_blocks=1 00:32:32.249 00:32:32.249 ' 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:32.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.249 --rc genhtml_branch_coverage=1 00:32:32.249 --rc genhtml_function_coverage=1 00:32:32.249 --rc genhtml_legend=1 00:32:32.249 --rc geninfo_all_blocks=1 00:32:32.249 --rc geninfo_unexecuted_blocks=1 00:32:32.249 00:32:32.249 ' 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:32.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.249 05:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:32.250 05:09:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=865067 00:32:32.250 05:09:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 865067 00:32:32.250 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 865067 ']' 00:32:32.250 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.250 05:09:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:32.250 05:09:23 
spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.250 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.250 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.250 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.250 [2024-12-10 05:09:23.232552] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:32:32.250 [2024-12-10 05:09:23.232602] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865067 ] 00:32:32.250 [2024-12-10 05:09:23.306755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:32.250 [2024-12-10 05:09:23.348179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.250 [2024-12-10 05:09:23.348182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.508 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.508 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:32.508 05:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:32.508 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:32.508 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.508 05:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:32.508 05:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:32.508 05:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:32.508 
05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.508 05:09:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.508 05:09:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:32.508 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:32.508 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:32.508 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:32.508 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:32.508 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:32.508 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:32.508 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:32.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:32.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:32.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:32.508 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:32.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:32.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:32.508 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:32.508 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:32.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:32.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:32.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:32.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:32.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:32.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:32.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:32.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:32.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:32.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:32.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:32.508 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:32.509 ' 00:32:35.040 [2024-12-10 05:09:26.166715] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:36.461 [2024-12-10 05:09:27.499109] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:38.995 [2024-12-10 05:09:29.986763] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:32:41.026 [2024-12-10 05:09:32.133578] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:42.928 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:42.928 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:42.928 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:42.928 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:42.928 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:42.928 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:42.928 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:42.928 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:42.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:42.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:42.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:42.928 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:42.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:42.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:42.928 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:32:42.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:42.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:42.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:42.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:42.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:42.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:42.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:42.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:42.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:42.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:42.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:42.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:42.928 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:42.928 05:09:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:42.928 05:09:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:42.928 
05:09:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:42.928 05:09:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:42.928 05:09:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:42.928 05:09:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:42.928 05:09:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:42.928 05:09:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:43.496 05:09:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:43.496 05:09:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:43.496 05:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:43.496 05:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:43.496 05:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:43.496 05:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:43.496 05:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:43.496 05:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:43.496 05:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:43.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:43.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:43.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:43.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:43.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:43.496 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:43.496 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:43.496 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:43.496 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:43.496 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:43.496 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:43.496 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:43.496 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:43.496 ' 00:32:50.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:50.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:50.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:50.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:50.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:50.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:50.069 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:50.069 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:50.069 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:50.069 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:50.069 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:50.069 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:50.069 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:50.069 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 865067 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 865067 ']' 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 865067 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 865067 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 865067' 00:32:50.069 killing process with pid 865067 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 865067 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 865067 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 865067 ']' 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 865067 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 865067 ']' 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 865067 00:32:50.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (865067) - No such process 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 865067 is not found' 00:32:50.069 Process with pid 865067 is not found 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:50.069 00:32:50.069 real 0m17.319s 00:32:50.069 user 0m38.164s 00:32:50.069 sys 0m0.799s 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.069 05:09:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:50.069 ************************************ 00:32:50.069 END TEST spdkcli_nvmf_tcp 00:32:50.069 ************************************ 00:32:50.069 05:09:40 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:50.069 05:09:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:50.069 05:09:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.069 05:09:40 -- common/autotest_common.sh@10 
-- # set +x 00:32:50.069 ************************************ 00:32:50.069 START TEST nvmf_identify_passthru 00:32:50.069 ************************************ 00:32:50.069 05:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:50.069 * Looking for test storage... 00:32:50.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:50.069 05:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:50.069 05:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:32:50.069 05:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:50.069 05:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:50.069 05:09:40 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:50.069 05:09:40 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:50.069 05:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:50.069 05:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:50.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.069 --rc genhtml_branch_coverage=1 00:32:50.069 --rc genhtml_function_coverage=1 00:32:50.069 --rc genhtml_legend=1 00:32:50.069 --rc geninfo_all_blocks=1 00:32:50.069 --rc geninfo_unexecuted_blocks=1 00:32:50.069 00:32:50.069 ' 00:32:50.069 
05:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:50.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.069 --rc genhtml_branch_coverage=1 00:32:50.069 --rc genhtml_function_coverage=1 00:32:50.069 --rc genhtml_legend=1 00:32:50.069 --rc geninfo_all_blocks=1 00:32:50.069 --rc geninfo_unexecuted_blocks=1 00:32:50.069 00:32:50.069 ' 00:32:50.069 05:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:50.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.069 --rc genhtml_branch_coverage=1 00:32:50.069 --rc genhtml_function_coverage=1 00:32:50.069 --rc genhtml_legend=1 00:32:50.069 --rc geninfo_all_blocks=1 00:32:50.069 --rc geninfo_unexecuted_blocks=1 00:32:50.069 00:32:50.069 ' 00:32:50.069 05:09:40 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:50.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.069 --rc genhtml_branch_coverage=1 00:32:50.069 --rc genhtml_function_coverage=1 00:32:50.069 --rc genhtml_legend=1 00:32:50.069 --rc geninfo_all_blocks=1 00:32:50.069 --rc geninfo_unexecuted_blocks=1 00:32:50.069 00:32:50.069 ' 00:32:50.069 05:09:40 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:50.069 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:50.069 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.069 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.069 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.069 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.069 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.069 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:32:50.069 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.069 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.069 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.069 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:50.070 05:09:40 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:50.070 05:09:40 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.070 05:09:40 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.070 05:09:40 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.070 05:09:40 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.070 05:09:40 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.070 05:09:40 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.070 05:09:40 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:50.070 05:09:40 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:50.070 05:09:40 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:50.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:50.070 05:09:40 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:50.070 05:09:40 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:50.070 05:09:40 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.070 05:09:40 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.070 05:09:40 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.070 05:09:40 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.070 05:09:40 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.070 05:09:40 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.070 05:09:40 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:50.070 05:09:40 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.070 05:09:40 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.070 05:09:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:50.070 05:09:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:50.070 05:09:40 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:50.070 05:09:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:55.345 
05:09:46 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:55.345 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:55.345 Found 0000:af:00.1 
(0x8086 - 0x159b) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:55.345 Found net devices under 0000:af:00.0: cvl_0_0 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.345 05:09:46 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:55.345 Found net devices under 0000:af:00.1: cvl_0_1 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:55.345 
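The trace above resolves each matched PCI address to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/` and stripping the path prefix with `${pci_net_devs[@]##*/}`. A minimal standalone sketch of that pattern, using a throwaway scratch directory in place of the real sysfs tree (the address/interface pair is the one reported in this run):

```shell
# Mimic /sys/bus/pci/devices/<bdf>/net/<ifname> with a scratch tree, then
# apply the same glob + prefix-strip the harness uses on the real sysfs.
set -euo pipefail

sysroot=$(mktemp -d)
mkdir -p "$sysroot/0000:af:00.0/net/cvl_0_0"

pci=0000:af:00.0
pci_net_devs=("$sysroot/$pci/net/"*)        # glob: full paths to net devices
pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the interface names

echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysroot"
```

The same two lines in `nvmf/common.sh` turn a bus/device/function address into an interface name without shelling out to `ls` or `basename`.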
05:09:46 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:55.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:55.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:32:55.345 00:32:55.345 --- 10.0.0.2 ping statistics --- 00:32:55.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.345 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:32:55.345 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:55.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:55.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:32:55.346 00:32:55.346 --- 10.0.0.1 ping statistics --- 00:32:55.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.346 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:32:55.346 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:55.346 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:55.346 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:55.346 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:55.346 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:55.346 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:55.346 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:55.346 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:55.346 05:09:46 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:55.346 05:09:46 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:55.346 05:09:46 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:55.346 05:09:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.346 05:09:46 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:55.346 
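The `nvmf_tcp_init` steps traced above split the two NIC ports across namespaces: one port is moved into a private namespace for the target (10.0.0.2) while the other stays in the host namespace for the initiator (10.0.0.1), with an iptables rule opening port 4420 and a ping in each direction to confirm the link. A dry-run sketch of that plumbing, where `run` echoes each privileged command instead of executing it (the real commands need root and the named interfaces):

```shell
# Dry-run sketch of the namespace plumbing from the trace. Swap the body of
# run() for '"$@"' to actually execute the commands (requires root).
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0 INIT_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"                      # target port leaves host ns
run ip addr add 10.0.0.1/24 dev "$INIT_IF"                    # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator
```

Everything the target process does later is prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is exactly what the trace stores in `NVMF_TARGET_NS_CMD`.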
05:09:46 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:55.346 05:09:46 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:55.346 05:09:46 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:55.346 05:09:46 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:55.346 05:09:46 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:55.346 05:09:46 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:55.346 05:09:46 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:55.605 05:09:46 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:55.605 05:09:46 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:55.605 05:09:46 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:55.605 05:09:46 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:55.605 05:09:46 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:32:55.605 05:09:46 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:55.605 05:09:46 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:55.605 05:09:46 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:55.605 05:09:46 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:55.605 05:09:46 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:59.796 05:09:50 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ7244049A1P0FGN 00:32:59.796 05:09:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:59.796 05:09:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:59.796 05:09:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:03.987 05:09:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:03.987 05:09:54 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:03.987 05:09:54 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:03.987 05:09:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:03.987 05:09:54 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:03.987 05:09:54 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:03.987 05:09:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:03.987 05:09:54 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=872275 00:33:03.987 05:09:54 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:03.987 05:09:54 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:03.987 05:09:54 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 872275 00:33:03.987 05:09:54 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 872275 ']' 00:33:03.987 05:09:54 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
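Both identify passes above reduce the controller report to a single field with the same `grep | awk '{print $3}'` pipeline. A self-contained sketch on canned identify output instead of real hardware; the serial number is the one reported in this run, while the full model string is a hypothetical example (the trace only shows that `$3` of the model line is `INTEL`):

```shell
# Reproduce the harness's field extraction on sample spdk_nvme_identify
# output. Note awk '{print $3}' keeps only the first word after the label,
# which is why the trace records nvme_model_number=INTEL.
identify_output=$'Serial Number:          BTLJ7244049A1P0FGN\nModel Number:           INTEL SSDPE2KX010T8'

nvme_serial_number=$(printf '%s\n' "$identify_output" | grep 'Serial Number:' | awk '{print $3}')
nvme_model_number=$(printf '%s\n' "$identify_output" | grep 'Model Number:' | awk '{print $3}')

echo "$nvme_serial_number"   # BTLJ7244049A1P0FGN
echo "$nvme_model_number"    # INTEL
```

The test later re-runs the identical pipeline against the NVMe-oF listener and simply string-compares the two results, so any multi-word model number is truncated consistently on both sides.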
00:33:03.987 05:09:54 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:03.987 05:09:54 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.987 05:09:54 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:03.987 05:09:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:03.987 [2024-12-10 05:09:54.953418] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:33:03.987 [2024-12-10 05:09:54.953469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:03.987 [2024-12-10 05:09:55.030079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:03.987 [2024-12-10 05:09:55.071248] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:03.987 [2024-12-10 05:09:55.071286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:03.987 [2024-12-10 05:09:55.071293] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:03.987 [2024-12-10 05:09:55.071299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:03.987 [2024-12-10 05:09:55.071304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:03.987 [2024-12-10 05:09:55.072607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:03.987 [2024-12-10 05:09:55.072709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:03.987 [2024-12-10 05:09:55.072734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:03.987 [2024-12-10 05:09:55.072736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.987 05:09:55 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:03.987 05:09:55 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:03.987 05:09:55 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:03.987 05:09:55 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.987 05:09:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:03.987 INFO: Log level set to 20 00:33:03.987 INFO: Requests: 00:33:03.987 { 00:33:03.987 "jsonrpc": "2.0", 00:33:03.987 "method": "nvmf_set_config", 00:33:03.987 "id": 1, 00:33:03.987 "params": { 00:33:03.987 "admin_cmd_passthru": { 00:33:03.987 "identify_ctrlr": true 00:33:03.987 } 00:33:03.987 } 00:33:03.987 } 00:33:03.987 00:33:03.987 INFO: response: 00:33:03.987 { 00:33:03.987 "jsonrpc": "2.0", 00:33:03.987 "id": 1, 00:33:03.987 "result": true 00:33:03.987 } 00:33:03.987 00:33:03.987 05:09:55 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.987 05:09:55 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:03.987 05:09:55 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.987 05:09:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:03.987 INFO: Setting log level to 20 00:33:03.987 INFO: Setting log level to 20 00:33:03.987 INFO: Log level set to 20 00:33:03.987 INFO: Log level set to 20 00:33:03.987 
INFO: Requests: 00:33:03.987 { 00:33:03.987 "jsonrpc": "2.0", 00:33:03.987 "method": "framework_start_init", 00:33:03.987 "id": 1 00:33:03.987 } 00:33:03.987 00:33:03.987 INFO: Requests: 00:33:03.987 { 00:33:03.987 "jsonrpc": "2.0", 00:33:03.987 "method": "framework_start_init", 00:33:03.987 "id": 1 00:33:03.987 } 00:33:03.987 00:33:04.246 [2024-12-10 05:09:55.180003] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:04.246 INFO: response: 00:33:04.246 { 00:33:04.246 "jsonrpc": "2.0", 00:33:04.246 "id": 1, 00:33:04.246 "result": true 00:33:04.246 } 00:33:04.246 00:33:04.246 INFO: response: 00:33:04.246 { 00:33:04.246 "jsonrpc": "2.0", 00:33:04.246 "id": 1, 00:33:04.246 "result": true 00:33:04.246 } 00:33:04.246 00:33:04.246 05:09:55 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.246 05:09:55 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:04.246 05:09:55 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.246 05:09:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:04.246 INFO: Setting log level to 40 00:33:04.246 INFO: Setting log level to 40 00:33:04.246 INFO: Setting log level to 40 00:33:04.246 [2024-12-10 05:09:55.189267] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:04.246 05:09:55 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.246 05:09:55 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:04.246 05:09:55 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:04.246 05:09:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:04.246 05:09:55 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:04.246 05:09:55 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.246 05:09:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.531 Nvme0n1 00:33:07.531 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.531 05:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:07.531 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.531 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.531 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.531 05:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:07.531 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.531 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.531 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.531 05:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:07.531 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.531 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.531 [2024-12-10 05:09:58.093270] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.531 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.531 05:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:07.531 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.531 05:09:58 
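The `rpc_cmd` calls traced above build the passthru target step by step: the passthru flag must be set while the app is still paused in `--wait-for-rpc`, then the framework starts, and the transport, bdev, subsystem, namespace, and listener are layered on top. A condensed dry-run of that sequence (the `rpc` wrapper here just echoes; the real harness drives `scripts/rpc.py` against `/var/tmp/spdk.sock`):

```shell
# Condensed dry-run of the RPC sequence from the trace. nvmf_set_config must
# precede framework_start_init or the passthru identify handler is not armed.
rpc() { echo "rpc.py $*"; }   # illustrative stand-in for scripts/rpc.py

rpc nvmf_set_config --passthru-identify-ctrlr
rpc framework_start_init
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The `nvmf_get_subsystems` dump that follows in the log is the readback of exactly this state: one discovery subsystem plus `cnode1` with a single namespace backed by `Nvme0n1` and a TCP listener on 10.0.0.2:4420.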
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.531 [ 00:33:07.531 { 00:33:07.531 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:07.531 "subtype": "Discovery", 00:33:07.531 "listen_addresses": [], 00:33:07.531 "allow_any_host": true, 00:33:07.531 "hosts": [] 00:33:07.531 }, 00:33:07.531 { 00:33:07.531 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:07.531 "subtype": "NVMe", 00:33:07.531 "listen_addresses": [ 00:33:07.531 { 00:33:07.531 "trtype": "TCP", 00:33:07.531 "adrfam": "IPv4", 00:33:07.531 "traddr": "10.0.0.2", 00:33:07.531 "trsvcid": "4420" 00:33:07.531 } 00:33:07.531 ], 00:33:07.531 "allow_any_host": true, 00:33:07.531 "hosts": [], 00:33:07.531 "serial_number": "SPDK00000000000001", 00:33:07.531 "model_number": "SPDK bdev Controller", 00:33:07.531 "max_namespaces": 1, 00:33:07.531 "min_cntlid": 1, 00:33:07.531 "max_cntlid": 65519, 00:33:07.531 "namespaces": [ 00:33:07.531 { 00:33:07.531 "nsid": 1, 00:33:07.531 "bdev_name": "Nvme0n1", 00:33:07.531 "name": "Nvme0n1", 00:33:07.531 "nguid": "537A34829B1549439A0408A0C91ED729", 00:33:07.531 "uuid": "537a3482-9b15-4943-9a04-08a0c91ed729" 00:33:07.531 } 00:33:07.531 ] 00:33:07.531 } 00:33:07.531 ] 00:33:07.531 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.531 05:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:07.531 05:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:07.531 05:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:07.531 05:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:33:07.531 05:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:07.531 05:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:07.531 05:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:07.532 05:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:07.532 05:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:33:07.532 05:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:07.532 05:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:07.532 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.532 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.532 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.532 05:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:07.532 05:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:07.532 05:09:58 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:07.532 05:09:58 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:07.532 05:09:58 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:07.532 05:09:58 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:07.532 05:09:58 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:07.532 05:09:58 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:07.532 rmmod nvme_tcp 00:33:07.532 rmmod nvme_fabrics 00:33:07.532 rmmod nvme_keyring 00:33:07.532 05:09:58 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:07.532 05:09:58 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:07.532 05:09:58 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:07.532 05:09:58 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 872275 ']' 00:33:07.532 05:09:58 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 872275 00:33:07.532 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 872275 ']' 00:33:07.532 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 872275 00:33:07.532 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:07.532 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:07.532 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 872275 00:33:07.532 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:07.532 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:07.532 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 872275' 00:33:07.532 killing process with pid 872275 00:33:07.532 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 872275 00:33:07.532 05:09:58 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 872275 00:33:09.437 05:10:00 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:09.437 05:10:00 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:09.437 05:10:00 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:09.437 05:10:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:09.437 05:10:00 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:09.437 05:10:00 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:33:09.437 05:10:00 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:09.437 05:10:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:09.437 05:10:00 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:09.437 05:10:00 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.437 05:10:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:09.437 05:10:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.342 05:10:02 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:11.342 00:33:11.342 real 0m21.822s 00:33:11.342 user 0m26.755s 00:33:11.342 sys 0m6.163s 00:33:11.342 05:10:02 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:11.342 05:10:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:11.342 ************************************ 00:33:11.342 END TEST nvmf_identify_passthru 00:33:11.342 ************************************ 00:33:11.342 05:10:02 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:11.342 05:10:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:11.342 05:10:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:11.342 05:10:02 -- common/autotest_common.sh@10 -- # set +x 00:33:11.342 ************************************ 00:33:11.342 START TEST nvmf_dif 00:33:11.342 ************************************ 00:33:11.342 05:10:02 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:11.342 * Looking for test storage... 
00:33:11.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:11.342 05:10:02 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:11.342 05:10:02 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:33:11.342 05:10:02 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:11.342 05:10:02 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:11.342 05:10:02 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:11.342 05:10:02 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:11.342 05:10:02 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:11.342 05:10:02 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:11.342 05:10:02 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:11.342 05:10:02 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:11.342 05:10:02 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:11.342 05:10:02 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:11.342 05:10:02 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:11.342 05:10:02 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:11.342 05:10:02 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:11.342 05:10:02 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:11.342 05:10:02 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:11.342 05:10:02 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:11.342 05:10:02 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:11.343 05:10:02 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:11.343 05:10:02 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:11.343 05:10:02 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:11.343 05:10:02 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:11.343 05:10:02 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:11.343 05:10:02 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:11.343 05:10:02 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:11.343 05:10:02 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:11.343 05:10:02 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:11.343 05:10:02 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:11.343 05:10:02 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:11.343 05:10:02 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:11.343 05:10:02 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:11.343 05:10:02 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:11.343 05:10:02 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:11.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.343 --rc genhtml_branch_coverage=1 00:33:11.343 --rc genhtml_function_coverage=1 00:33:11.343 --rc genhtml_legend=1 00:33:11.343 --rc geninfo_all_blocks=1 00:33:11.343 --rc geninfo_unexecuted_blocks=1 00:33:11.343 00:33:11.343 ' 00:33:11.343 05:10:02 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:11.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.343 --rc genhtml_branch_coverage=1 00:33:11.343 --rc genhtml_function_coverage=1 00:33:11.343 --rc genhtml_legend=1 00:33:11.343 --rc geninfo_all_blocks=1 00:33:11.343 --rc geninfo_unexecuted_blocks=1 00:33:11.343 00:33:11.343 ' 00:33:11.343 05:10:02 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:33:11.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.343 --rc genhtml_branch_coverage=1 00:33:11.343 --rc genhtml_function_coverage=1 00:33:11.343 --rc genhtml_legend=1 00:33:11.343 --rc geninfo_all_blocks=1 00:33:11.343 --rc geninfo_unexecuted_blocks=1 00:33:11.343 00:33:11.343 ' 00:33:11.343 05:10:02 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:11.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.343 --rc genhtml_branch_coverage=1 00:33:11.343 --rc genhtml_function_coverage=1 00:33:11.343 --rc genhtml_legend=1 00:33:11.343 --rc geninfo_all_blocks=1 00:33:11.343 --rc geninfo_unexecuted_blocks=1 00:33:11.343 00:33:11.343 ' 00:33:11.343 05:10:02 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:11.343 05:10:02 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:11.343 05:10:02 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:11.343 05:10:02 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:11.343 05:10:02 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:11.343 05:10:02 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:11.343 05:10:02 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.343 05:10:02 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.343 05:10:02 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.343 05:10:02 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:11.343 05:10:02 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:11.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:11.343 05:10:02 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:11.343 05:10:02 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:33:11.343 05:10:02 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:11.343 05:10:02 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:11.343 05:10:02 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.343 05:10:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:11.343 05:10:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:11.343 05:10:02 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:33:11.343 05:10:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:17.915 05:10:08 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:17.915 05:10:08 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:17.916 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:17.916 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:17.916 05:10:08 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:17.916 Found net devices under 0000:af:00.0: cvl_0_0 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:17.916 Found net devices under 0000:af:00.1: cvl_0_1 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:17.916 
05:10:08 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:17.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:17.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:33:17.916 00:33:17.916 --- 10.0.0.2 ping statistics --- 00:33:17.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.916 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:17.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:17.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:33:17.916 00:33:17.916 --- 10.0.0.1 ping statistics --- 00:33:17.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.916 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:17.916 05:10:08 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:19.820 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:19.820 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:19.820 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:19.820 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:19.820 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:19.820 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:20.078 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:20.078 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:20.079 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:20.079 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:20.079 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:20.079 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:20.079 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:33:20.079 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:20.079 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:20.079 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:20.079 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:20.079 05:10:11 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:20.079 05:10:11 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:20.079 05:10:11 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:20.079 05:10:11 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:20.079 05:10:11 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:20.079 05:10:11 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:20.079 05:10:11 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:20.079 05:10:11 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:20.079 05:10:11 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:20.079 05:10:11 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:20.079 05:10:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:20.079 05:10:11 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=877645 00:33:20.079 05:10:11 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:20.079 05:10:11 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 877645 00:33:20.079 05:10:11 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 877645 ']' 00:33:20.079 05:10:11 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.079 05:10:11 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:20.079 05:10:11 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:20.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.079 05:10:11 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:20.079 05:10:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:20.337 [2024-12-10 05:10:11.218741] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:33:20.337 [2024-12-10 05:10:11.218782] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:20.337 [2024-12-10 05:10:11.296421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.337 [2024-12-10 05:10:11.335203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:20.337 [2024-12-10 05:10:11.335238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:20.337 [2024-12-10 05:10:11.335245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:20.337 [2024-12-10 05:10:11.335251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:20.337 [2024-12-10 05:10:11.335256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:20.337 [2024-12-10 05:10:11.335732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.337 05:10:11 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:20.337 05:10:11 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:20.337 05:10:11 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:20.337 05:10:11 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:20.337 05:10:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:20.337 05:10:11 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:20.337 05:10:11 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:20.337 05:10:11 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:20.337 05:10:11 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.337 05:10:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:20.596 [2024-12-10 05:10:11.470211] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.596 05:10:11 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.596 05:10:11 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:20.596 05:10:11 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:20.596 05:10:11 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:20.596 05:10:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:20.596 ************************************ 00:33:20.596 START TEST fio_dif_1_default 00:33:20.596 ************************************ 00:33:20.596 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:20.596 05:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:20.596 05:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:20.596 05:10:11 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:20.596 05:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:20.596 05:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:20.596 05:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:20.596 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.596 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:20.596 bdev_null0 00:33:20.596 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.596 05:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:20.596 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.596 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:20.597 [2024-12-10 05:10:11.542508] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:20.597 { 00:33:20.597 "params": { 00:33:20.597 "name": "Nvme$subsystem", 00:33:20.597 "trtype": "$TEST_TRANSPORT", 00:33:20.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.597 "adrfam": "ipv4", 00:33:20.597 "trsvcid": "$NVMF_PORT", 00:33:20.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:20.597 "hdgst": ${hdgst:-false}, 00:33:20.597 "ddgst": ${ddgst:-false} 00:33:20.597 }, 00:33:20.597 "method": "bdev_nvme_attach_controller" 00:33:20.597 } 00:33:20.597 EOF 00:33:20.597 )") 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:20.597 "params": { 00:33:20.597 "name": "Nvme0", 00:33:20.597 "trtype": "tcp", 00:33:20.597 "traddr": "10.0.0.2", 00:33:20.597 "adrfam": "ipv4", 00:33:20.597 "trsvcid": "4420", 00:33:20.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:20.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:20.597 "hdgst": false, 00:33:20.597 "ddgst": false 00:33:20.597 }, 00:33:20.597 "method": "bdev_nvme_attach_controller" 00:33:20.597 }' 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:20.597 05:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:20.856 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:20.856 fio-3.35 
00:33:20.856 Starting 1 thread 00:33:33.142 00:33:33.142 filename0: (groupid=0, jobs=1): err= 0: pid=878009: Tue Dec 10 05:10:22 2024 00:33:33.142 read: IOPS=156, BW=627KiB/s (642kB/s)(6288KiB/10027msec) 00:33:33.142 slat (nsec): min=5804, max=26104, avg=6124.34, stdev=917.51 00:33:33.142 clat (usec): min=375, max=44614, avg=25495.86, stdev=19834.23 00:33:33.142 lat (usec): min=381, max=44640, avg=25501.99, stdev=19834.25 00:33:33.142 clat percentiles (usec): 00:33:33.142 | 1.00th=[ 392], 5.00th=[ 420], 10.00th=[ 465], 20.00th=[ 594], 00:33:33.142 | 30.00th=[ 611], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:33:33.142 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:33:33.142 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:33:33.142 | 99.99th=[44827] 00:33:33.142 bw ( KiB/s): min= 384, max= 960, per=99.98%, avg=627.20, stdev=244.46, samples=20 00:33:33.142 iops : min= 96, max= 240, avg=156.80, stdev=61.11, samples=20 00:33:33.142 lat (usec) : 500=16.41%, 750=22.26% 00:33:33.142 lat (msec) : 50=61.32% 00:33:33.142 cpu : usr=92.55%, sys=7.21%, ctx=18, majf=0, minf=0 00:33:33.142 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:33.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.142 issued rwts: total=1572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:33.142 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:33.142 00:33:33.142 Run status group 0 (all jobs): 00:33:33.142 READ: bw=627KiB/s (642kB/s), 627KiB/s-627KiB/s (642kB/s-642kB/s), io=6288KiB (6439kB), run=10027-10027msec 00:33:33.142 05:10:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:33.142 05:10:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:33.142 05:10:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:33:33.142 05:10:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:33.142 05:10:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:33.142 05:10:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:33.142 05:10:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.142 05:10:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.143 00:33:33.143 real 0m11.102s 00:33:33.143 user 0m16.139s 00:33:33.143 sys 0m1.013s 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:33.143 ************************************ 00:33:33.143 END TEST fio_dif_1_default 00:33:33.143 ************************************ 00:33:33.143 05:10:22 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:33.143 05:10:22 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:33.143 05:10:22 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:33.143 05:10:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:33.143 ************************************ 00:33:33.143 START TEST fio_dif_1_multi_subsystems 00:33:33.143 ************************************ 00:33:33.143 05:10:22 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.143 bdev_null0 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.143 05:10:22 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.143 [2024-12-10 05:10:22.711080] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.143 bdev_null1 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 
00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:33.143 { 00:33:33.143 "params": { 00:33:33.143 "name": "Nvme$subsystem", 00:33:33.143 "trtype": "$TEST_TRANSPORT", 00:33:33.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:33.143 "adrfam": "ipv4", 00:33:33.143 "trsvcid": "$NVMF_PORT", 00:33:33.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:33.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:33.143 "hdgst": ${hdgst:-false}, 00:33:33.143 "ddgst": ${ddgst:-false} 00:33:33.143 }, 00:33:33.143 "method": "bdev_nvme_attach_controller" 00:33:33.143 } 00:33:33.143 EOF 00:33:33.143 )") 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:33.143 { 00:33:33.143 "params": { 00:33:33.143 "name": "Nvme$subsystem", 00:33:33.143 "trtype": "$TEST_TRANSPORT", 00:33:33.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:33.143 "adrfam": "ipv4", 00:33:33.143 "trsvcid": "$NVMF_PORT", 00:33:33.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:33.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:33.143 "hdgst": ${hdgst:-false}, 00:33:33.143 "ddgst": ${ddgst:-false} 00:33:33.143 }, 00:33:33.143 "method": "bdev_nvme_attach_controller" 00:33:33.143 } 00:33:33.143 EOF 00:33:33.143 )") 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:33.143 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:33.143 "params": { 00:33:33.143 "name": "Nvme0", 00:33:33.143 "trtype": "tcp", 00:33:33.143 "traddr": "10.0.0.2", 00:33:33.143 "adrfam": "ipv4", 00:33:33.143 "trsvcid": "4420", 00:33:33.143 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:33.143 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:33.143 "hdgst": false, 00:33:33.143 "ddgst": false 00:33:33.143 }, 00:33:33.143 "method": "bdev_nvme_attach_controller" 00:33:33.143 },{ 00:33:33.143 "params": { 00:33:33.144 "name": "Nvme1", 00:33:33.144 "trtype": "tcp", 00:33:33.144 "traddr": "10.0.0.2", 00:33:33.144 "adrfam": "ipv4", 00:33:33.144 "trsvcid": "4420", 00:33:33.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:33.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:33.144 "hdgst": false, 00:33:33.144 "ddgst": false 00:33:33.144 }, 00:33:33.144 "method": "bdev_nvme_attach_controller" 00:33:33.144 }' 00:33:33.144 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:33.144 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:33.144 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:33.144 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:33.144 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:33.144 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:33.144 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:33.144 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:33.144 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:33.144 05:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:33.144 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:33.144 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:33.144 fio-3.35 00:33:33.144 Starting 2 threads 00:33:43.117 00:33:43.117 filename0: (groupid=0, jobs=1): err= 0: pid=879931: Tue Dec 10 05:10:33 2024 00:33:43.117 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:33:43.117 slat (nsec): min=5906, max=29782, avg=7707.46, stdev=2742.71 00:33:43.117 clat (usec): min=40822, max=41984, avg=41004.17, stdev=164.03 00:33:43.117 lat (usec): min=40828, max=41996, avg=41011.88, stdev=164.30 00:33:43.117 clat percentiles (usec): 00:33:43.117 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:43.117 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:43.117 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:43.117 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:43.117 | 99.99th=[42206] 00:33:43.117 bw ( KiB/s): min= 384, max= 416, per=33.45%, avg=388.80, stdev=11.72, samples=20 00:33:43.117 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:33:43.117 lat (msec) : 50=100.00% 00:33:43.117 cpu : usr=96.78%, sys=2.98%, ctx=6, majf=0, minf=177 00:33:43.117 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:43.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.117 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.117 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.117 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:43.117 filename1: (groupid=0, jobs=1): err= 0: pid=879932: Tue Dec 10 05:10:33 2024 00:33:43.117 read: IOPS=192, BW=771KiB/s (789kB/s)(7728KiB/10028msec) 00:33:43.117 slat (nsec): min=5907, max=29046, avg=7097.92, stdev=2133.92 00:33:43.117 clat (usec): min=386, max=42574, avg=20740.83, stdev=20421.63 00:33:43.117 lat (usec): min=392, max=42581, avg=20747.93, stdev=20420.98 00:33:43.117 clat percentiles (usec): 00:33:43.117 | 1.00th=[ 396], 5.00th=[ 404], 10.00th=[ 408], 20.00th=[ 416], 00:33:43.117 | 30.00th=[ 424], 40.00th=[ 457], 50.00th=[ 750], 60.00th=[40633], 00:33:43.117 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:33:43.117 | 99.00th=[41681], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:43.117 | 99.99th=[42730] 00:33:43.117 bw ( KiB/s): min= 768, max= 832, per=66.47%, avg=771.20, stdev=14.31, samples=20 00:33:43.117 iops : min= 192, max= 208, avg=192.80, stdev= 3.58, samples=20 00:33:43.117 lat (usec) : 500=41.41%, 750=8.59%, 1000=0.10% 00:33:43.117 lat (msec) : 2=0.21%, 50=49.69% 00:33:43.117 cpu : usr=96.58%, sys=3.18%, ctx=9, majf=0, minf=67 00:33:43.117 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:43.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.117 issued rwts: total=1932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.117 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:43.117 00:33:43.117 Run status group 0 (all jobs): 00:33:43.117 READ: bw=1160KiB/s (1188kB/s), 390KiB/s-771KiB/s (399kB/s-789kB/s), io=11.4MiB (11.9MB), run=10011-10028msec 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 
00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:43.117 05:10:33 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.117 00:33:43.117 real 0m11.240s 00:33:43.117 user 0m26.092s 00:33:43.117 sys 0m0.910s 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:43.117 05:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:43.117 ************************************ 00:33:43.117 END TEST fio_dif_1_multi_subsystems 00:33:43.117 ************************************ 00:33:43.117 05:10:33 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:43.117 05:10:33 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:43.118 05:10:33 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:43.118 05:10:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:43.118 ************************************ 00:33:43.118 START TEST fio_dif_rand_params 00:33:43.118 ************************************ 00:33:43.118 05:10:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:43.118 05:10:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:43.118 05:10:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:43.118 05:10:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:43.118 05:10:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:43.118 05:10:33 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:43.118 05:10:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:43.118 05:10:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:43.118 05:10:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:43.118 05:10:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:43.118 05:10:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:43.118 05:10:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:43.118 05:10:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:43.118 05:10:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:43.118 05:10:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.118 05:10:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.118 bdev_null0 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
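The `create_subsystem` helper traced here boils down to four RPCs (null bdev with DIF type 3 metadata, subsystem, namespace, TCP listener). A dry-run sketch with `rpc_cmd` stubbed to `echo`, since the real calls need a live SPDK target; all names and flags are copied from the trace:

```shell
#!/usr/bin/env bash
# Dry-run sketch of create_subsystem from target/dif.sh as traced above.
# rpc_cmd is stubbed out here; in the test it wraps scripts/rpc.py against
# the running nvmf target.
rpc_cmd() { echo "rpc_cmd $*"; }

create_subsystem() {
  local sub_id=$1
  rpc_cmd bdev_null_create "bdev_null$sub_id" 64 512 --md-size 16 --dif-type 3
  rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub_id" \
    --serial-number "53313233-$sub_id" --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub_id" "bdev_null$sub_id"
  rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub_id" \
    -t tcp -a 10.0.0.2 -s 4420
}

create_subsystem 0
```

The earlier fio_dif_1_* tests follow the same sequence with `--dif-type 1` instead of 3.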
00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.118 [2024-12-10 05:10:34.024480] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:43.118 { 00:33:43.118 "params": { 00:33:43.118 "name": "Nvme$subsystem", 00:33:43.118 "trtype": "$TEST_TRANSPORT", 00:33:43.118 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:33:43.118 "adrfam": "ipv4", 00:33:43.118 "trsvcid": "$NVMF_PORT", 00:33:43.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:43.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:43.118 "hdgst": ${hdgst:-false}, 00:33:43.118 "ddgst": ${ddgst:-false} 00:33:43.118 }, 00:33:43.118 "method": "bdev_nvme_attach_controller" 00:33:43.118 } 00:33:43.118 EOF 00:33:43.118 )") 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:43.118 05:10:34 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:43.118 "params": { 00:33:43.118 "name": "Nvme0", 00:33:43.118 "trtype": "tcp", 00:33:43.118 "traddr": "10.0.0.2", 00:33:43.118 "adrfam": "ipv4", 00:33:43.118 "trsvcid": "4420", 00:33:43.118 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:43.118 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:43.118 "hdgst": false, 00:33:43.118 "ddgst": false 00:33:43.118 }, 00:33:43.118 "method": "bdev_nvme_attach_controller" 00:33:43.118 }' 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:43.118 05:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:43.376 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:43.376 ... 00:33:43.376 fio-3.35 00:33:43.376 Starting 3 threads 00:33:49.940 00:33:49.940 filename0: (groupid=0, jobs=1): err= 0: pid=881841: Tue Dec 10 05:10:39 2024 00:33:49.940 read: IOPS=324, BW=40.6MiB/s (42.5MB/s)(205MiB/5047msec) 00:33:49.940 slat (nsec): min=6229, max=28303, avg=11314.42, stdev=2264.75 00:33:49.940 clat (usec): min=5016, max=51105, avg=9204.14, stdev=3932.28 00:33:49.940 lat (usec): min=5027, max=51117, avg=9215.46, stdev=3932.23 00:33:49.940 clat percentiles (usec): 00:33:49.940 | 1.00th=[ 5604], 5.00th=[ 6325], 10.00th=[ 6980], 20.00th=[ 7898], 00:33:49.940 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:33:49.940 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[11076], 00:33:49.940 | 99.00th=[12649], 99.50th=[49021], 99.90th=[51119], 99.95th=[51119], 00:33:49.940 | 99.99th=[51119] 00:33:49.940 bw ( KiB/s): min=29952, max=46592, per=35.39%, avg=41856.00, stdev=4538.34, samples=10 00:33:49.940 iops : min= 234, max= 364, avg=327.00, stdev=35.46, samples=10 00:33:49.940 lat (msec) : 10=81.38%, 20=17.77%, 50=0.67%, 100=0.18% 00:33:49.940 cpu : usr=94.49%, sys=5.21%, ctx=8, majf=0, minf=69 00:33:49.940 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.940 issued rwts: total=1638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.940 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:49.940 filename0: (groupid=0, jobs=1): err= 0: pid=881842: Tue Dec 10 05:10:39 2024 00:33:49.940 read: IOPS=312, BW=39.1MiB/s (41.0MB/s)(197MiB/5045msec) 00:33:49.940 slat (nsec): min=6159, max=27692, avg=11793.05, stdev=2072.67 00:33:49.940 
clat (usec): min=5588, max=50333, avg=9543.85, stdev=3105.32 00:33:49.940 lat (usec): min=5601, max=50345, avg=9555.64, stdev=3105.38 00:33:49.940 clat percentiles (usec): 00:33:49.940 | 1.00th=[ 6128], 5.00th=[ 6652], 10.00th=[ 7308], 20.00th=[ 8291], 00:33:49.940 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:33:49.940 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11207], 95.00th=[11731], 00:33:49.940 | 99.00th=[12780], 99.50th=[44827], 99.90th=[49546], 99.95th=[50594], 00:33:49.940 | 99.99th=[50594] 00:33:49.940 bw ( KiB/s): min=37632, max=43520, per=34.13%, avg=40371.20, stdev=2114.65, samples=10 00:33:49.940 iops : min= 294, max= 340, avg=315.40, stdev=16.52, samples=10 00:33:49.940 lat (msec) : 10=66.62%, 20=32.87%, 50=0.44%, 100=0.06% 00:33:49.940 cpu : usr=94.53%, sys=5.15%, ctx=9, majf=0, minf=57 00:33:49.940 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.940 issued rwts: total=1579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.940 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:49.940 filename0: (groupid=0, jobs=1): err= 0: pid=881844: Tue Dec 10 05:10:39 2024 00:33:49.940 read: IOPS=286, BW=35.9MiB/s (37.6MB/s)(181MiB/5044msec) 00:33:49.940 slat (nsec): min=6159, max=26395, avg=11288.48, stdev=2074.15 00:33:49.940 clat (usec): min=3392, max=91295, avg=10414.82, stdev=5982.79 00:33:49.940 lat (usec): min=3398, max=91308, avg=10426.11, stdev=5982.79 00:33:49.940 clat percentiles (usec): 00:33:49.940 | 1.00th=[ 5276], 5.00th=[ 6718], 10.00th=[ 8029], 20.00th=[ 8717], 00:33:49.940 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10159], 00:33:49.940 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11469], 95.00th=[11994], 00:33:49.940 | 99.00th=[50070], 99.50th=[51119], 99.90th=[89654], 99.95th=[91751], 
00:33:49.940 | 99.99th=[91751] 00:33:49.940 bw ( KiB/s): min=29952, max=46080, per=31.27%, avg=36992.00, stdev=4533.52, samples=10 00:33:49.940 iops : min= 234, max= 360, avg=289.00, stdev=35.42, samples=10 00:33:49.940 lat (msec) : 4=0.69%, 10=56.12%, 20=41.53%, 50=0.48%, 100=1.17% 00:33:49.940 cpu : usr=94.53%, sys=5.20%, ctx=7, majf=0, minf=21 00:33:49.940 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.940 issued rwts: total=1447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.940 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:49.940 00:33:49.940 Run status group 0 (all jobs): 00:33:49.940 READ: bw=116MiB/s (121MB/s), 35.9MiB/s-40.6MiB/s (37.6MB/s-42.5MB/s), io=583MiB (611MB), run=5044-5047msec 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.940 bdev_null0 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.940 
05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.940 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.941 [2024-12-10 05:10:40.154875] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.941 bdev_null1 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.941 
05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:33:49.941 bdev_null2 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem 
config 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:49.941 { 00:33:49.941 "params": { 00:33:49.941 "name": "Nvme$subsystem", 00:33:49.941 "trtype": "$TEST_TRANSPORT", 00:33:49.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:49.941 "adrfam": "ipv4", 00:33:49.941 "trsvcid": "$NVMF_PORT", 00:33:49.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:49.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:49.941 "hdgst": ${hdgst:-false}, 00:33:49.941 "ddgst": ${ddgst:-false} 00:33:49.941 }, 00:33:49.941 "method": "bdev_nvme_attach_controller" 00:33:49.941 } 00:33:49.941 EOF 00:33:49.941 )") 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:49.941 { 00:33:49.941 "params": { 00:33:49.941 "name": "Nvme$subsystem", 00:33:49.941 "trtype": "$TEST_TRANSPORT", 00:33:49.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:49.941 "adrfam": "ipv4", 00:33:49.941 "trsvcid": "$NVMF_PORT", 00:33:49.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:49.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:49.941 "hdgst": ${hdgst:-false}, 00:33:49.941 "ddgst": ${ddgst:-false} 00:33:49.941 }, 00:33:49.941 "method": "bdev_nvme_attach_controller" 00:33:49.941 } 00:33:49.941 EOF 00:33:49.941 )") 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:49.941 
05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:49.941 { 00:33:49.941 "params": { 00:33:49.941 "name": "Nvme$subsystem", 00:33:49.941 "trtype": "$TEST_TRANSPORT", 00:33:49.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:49.941 "adrfam": "ipv4", 00:33:49.941 "trsvcid": "$NVMF_PORT", 00:33:49.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:49.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:49.941 "hdgst": ${hdgst:-false}, 00:33:49.941 "ddgst": ${ddgst:-false} 00:33:49.941 }, 00:33:49.941 "method": "bdev_nvme_attach_controller" 00:33:49.941 } 00:33:49.941 EOF 00:33:49.941 )") 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:49.941 05:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:49.941 "params": { 00:33:49.941 "name": "Nvme0", 00:33:49.941 "trtype": "tcp", 00:33:49.941 "traddr": "10.0.0.2", 00:33:49.941 "adrfam": "ipv4", 00:33:49.941 "trsvcid": "4420", 00:33:49.941 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:49.941 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:49.941 "hdgst": false, 00:33:49.941 "ddgst": false 00:33:49.941 }, 00:33:49.941 "method": "bdev_nvme_attach_controller" 00:33:49.941 },{ 00:33:49.941 "params": { 00:33:49.941 "name": "Nvme1", 00:33:49.941 "trtype": "tcp", 00:33:49.941 "traddr": "10.0.0.2", 00:33:49.941 "adrfam": "ipv4", 00:33:49.941 "trsvcid": "4420", 00:33:49.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:49.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:49.941 "hdgst": false, 00:33:49.941 "ddgst": false 00:33:49.941 }, 00:33:49.941 "method": "bdev_nvme_attach_controller" 00:33:49.941 },{ 00:33:49.941 "params": { 00:33:49.941 "name": "Nvme2", 00:33:49.941 "trtype": "tcp", 00:33:49.941 "traddr": "10.0.0.2", 00:33:49.941 "adrfam": "ipv4", 00:33:49.941 "trsvcid": "4420", 00:33:49.941 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:49.942 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:49.942 "hdgst": false, 00:33:49.942 "ddgst": false 00:33:49.942 }, 00:33:49.942 "method": "bdev_nvme_attach_controller" 00:33:49.942 }' 00:33:49.942 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:49.942 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:49.942 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:49.942 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.942 05:10:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:49.942 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:49.942 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:49.942 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:49.942 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:49.942 05:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.942 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:49.942 ... 00:33:49.942 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:49.942 ... 00:33:49.942 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:49.942 ... 
00:33:49.942 fio-3.35 00:33:49.942 Starting 24 threads 00:34:02.146 00:34:02.146 filename0: (groupid=0, jobs=1): err= 0: pid=882869: Tue Dec 10 05:10:51 2024 00:34:02.146 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.2MiB/10005msec) 00:34:02.146 slat (usec): min=7, max=120, avg=48.15, stdev=19.42 00:34:02.146 clat (usec): min=11283, max=31488, avg=26530.13, stdev=2125.76 00:34:02.146 lat (usec): min=11338, max=31543, avg=26578.28, stdev=2129.03 00:34:02.146 clat percentiles (usec): 00:34:02.146 | 1.00th=[23200], 5.00th=[24249], 10.00th=[24511], 20.00th=[25035], 00:34:02.146 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:34:02.146 | 70.00th=[27132], 80.00th=[28443], 90.00th=[29754], 95.00th=[30540], 00:34:02.146 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:34:02.146 | 99.99th=[31589] 00:34:02.146 bw ( KiB/s): min= 2176, max= 2688, per=4.17%, avg=2371.11, stdev=137.27, samples=19 00:34:02.146 iops : min= 544, max= 672, avg=592.74, stdev=34.30, samples=19 00:34:02.146 lat (msec) : 20=0.54%, 50=99.46% 00:34:02.146 cpu : usr=98.97%, sys=0.64%, ctx=16, majf=0, minf=9 00:34:02.146 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.146 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.146 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.146 filename0: (groupid=0, jobs=1): err= 0: pid=882870: Tue Dec 10 05:10:51 2024 00:34:02.146 read: IOPS=595, BW=2382KiB/s (2439kB/s)(23.3MiB/10022msec) 00:34:02.146 slat (nsec): min=6384, max=99499, avg=30709.25, stdev=18755.51 00:34:02.146 clat (usec): min=6641, max=31386, avg=26555.46, stdev=2508.63 00:34:02.146 lat (usec): min=6671, max=31444, avg=26586.17, stdev=2509.60 00:34:02.146 clat percentiles (usec): 00:34:02.146 | 1.00th=[15926], 
5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:34:02.146 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:34:02.146 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30016], 95.00th=[30540], 00:34:02.146 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31065], 99.95th=[31327], 00:34:02.146 | 99.99th=[31327] 00:34:02.146 bw ( KiB/s): min= 2048, max= 2688, per=4.19%, avg=2380.55, stdev=157.51, samples=20 00:34:02.146 iops : min= 512, max= 672, avg=595.10, stdev=39.37, samples=20 00:34:02.146 lat (msec) : 10=0.27%, 20=0.97%, 50=98.76% 00:34:02.146 cpu : usr=98.91%, sys=0.69%, ctx=38, majf=0, minf=9 00:34:02.146 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.146 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.146 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.146 filename0: (groupid=0, jobs=1): err= 0: pid=882871: Tue Dec 10 05:10:51 2024 00:34:02.146 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10003msec) 00:34:02.146 slat (usec): min=6, max=114, avg=45.40, stdev=20.64 00:34:02.146 clat (usec): min=9077, max=46055, avg=26596.97, stdev=2366.41 00:34:02.146 lat (usec): min=9096, max=46072, avg=26642.37, stdev=2368.56 00:34:02.146 clat percentiles (usec): 00:34:02.146 | 1.00th=[22938], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:34:02.146 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:34:02.146 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30016], 95.00th=[30540], 00:34:02.146 | 99.00th=[31065], 99.50th=[31065], 99.90th=[45876], 99.95th=[45876], 00:34:02.146 | 99.99th=[45876] 00:34:02.146 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2364.63, stdev=150.05, samples=19 00:34:02.146 iops : min= 544, max= 672, avg=591.16, stdev=37.51, samples=19 00:34:02.146 lat (msec) 
: 10=0.27%, 50=99.73% 00:34:02.146 cpu : usr=98.86%, sys=0.70%, ctx=41, majf=0, minf=9 00:34:02.146 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:02.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.146 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.146 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.146 filename0: (groupid=0, jobs=1): err= 0: pid=882872: Tue Dec 10 05:10:51 2024 00:34:02.146 read: IOPS=591, BW=2368KiB/s (2425kB/s)(23.1MiB/10001msec) 00:34:02.146 slat (usec): min=8, max=111, avg=47.07, stdev=19.45 00:34:02.146 clat (usec): min=9101, max=44543, avg=26583.24, stdev=2350.59 00:34:02.146 lat (usec): min=9120, max=44581, avg=26630.31, stdev=2351.03 00:34:02.146 clat percentiles (usec): 00:34:02.146 | 1.00th=[22676], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:34:02.146 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:34:02.146 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30016], 95.00th=[30540], 00:34:02.146 | 99.00th=[31065], 99.50th=[31065], 99.90th=[44303], 99.95th=[44303], 00:34:02.146 | 99.99th=[44303] 00:34:02.146 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2364.63, stdev=130.59, samples=19 00:34:02.146 iops : min= 544, max= 640, avg=591.16, stdev=32.65, samples=19 00:34:02.146 lat (msec) : 10=0.27%, 50=99.73% 00:34:02.146 cpu : usr=98.53%, sys=0.95%, ctx=39, majf=0, minf=9 00:34:02.146 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:02.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.146 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.146 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.146 filename0: (groupid=0, jobs=1): 
err= 0: pid=882873: Tue Dec 10 05:10:51 2024 00:34:02.146 read: IOPS=591, BW=2368KiB/s (2424kB/s)(23.1MiB/10002msec) 00:34:02.146 slat (usec): min=6, max=122, avg=44.62, stdev=20.51 00:34:02.146 clat (usec): min=9081, max=45014, avg=26625.42, stdev=2340.99 00:34:02.146 lat (usec): min=9094, max=45035, avg=26670.05, stdev=2343.41 00:34:02.146 clat percentiles (usec): 00:34:02.146 | 1.00th=[23200], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:34:02.146 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:34:02.146 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30016], 95.00th=[30540], 00:34:02.146 | 99.00th=[31065], 99.50th=[31065], 99.90th=[44827], 99.95th=[44827], 00:34:02.146 | 99.99th=[44827] 00:34:02.146 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2364.84, stdev=149.78, samples=19 00:34:02.146 iops : min= 544, max= 672, avg=591.21, stdev=37.44, samples=19 00:34:02.146 lat (msec) : 10=0.27%, 50=99.73% 00:34:02.146 cpu : usr=98.53%, sys=0.89%, ctx=84, majf=0, minf=9 00:34:02.146 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.146 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.146 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.146 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.146 filename0: (groupid=0, jobs=1): err= 0: pid=882874: Tue Dec 10 05:10:51 2024 00:34:02.146 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.2MiB/10004msec) 00:34:02.146 slat (nsec): min=7594, max=74212, avg=18385.82, stdev=8454.50 00:34:02.146 clat (usec): min=11842, max=31579, avg=26809.62, stdev=2148.68 00:34:02.146 lat (usec): min=11855, max=31602, avg=26828.01, stdev=2147.82 00:34:02.146 clat percentiles (usec): 00:34:02.146 | 1.00th=[23200], 5.00th=[24511], 10.00th=[25035], 20.00th=[25297], 00:34:02.146 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 
60.00th=[27132], 00:34:02.146 | 70.00th=[27132], 80.00th=[28967], 90.00th=[30278], 95.00th=[30802], 00:34:02.146 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31589], 99.95th=[31589], 00:34:02.146 | 99.99th=[31589] 00:34:02.146 bw ( KiB/s): min= 2176, max= 2688, per=4.17%, avg=2371.11, stdev=137.27, samples=19 00:34:02.146 iops : min= 544, max= 672, avg=592.74, stdev=34.30, samples=19 00:34:02.146 lat (msec) : 20=0.54%, 50=99.46% 00:34:02.146 cpu : usr=98.60%, sys=0.87%, ctx=69, majf=0, minf=9 00:34:02.146 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:02.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.147 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.147 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.147 filename0: (groupid=0, jobs=1): err= 0: pid=882875: Tue Dec 10 05:10:51 2024 00:34:02.147 read: IOPS=591, BW=2368KiB/s (2424kB/s)(23.1MiB/10002msec) 00:34:02.147 slat (usec): min=6, max=112, avg=44.95, stdev=20.36 00:34:02.147 clat (usec): min=9118, max=45616, avg=26602.50, stdev=2356.21 00:34:02.147 lat (usec): min=9133, max=45633, avg=26647.45, stdev=2358.20 00:34:02.147 clat percentiles (usec): 00:34:02.147 | 1.00th=[23200], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:34:02.147 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:34:02.147 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30016], 95.00th=[30540], 00:34:02.147 | 99.00th=[31065], 99.50th=[31065], 99.90th=[45351], 99.95th=[45351], 00:34:02.147 | 99.99th=[45876] 00:34:02.147 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2364.63, stdev=156.00, samples=19 00:34:02.147 iops : min= 544, max= 672, avg=591.16, stdev=39.00, samples=19 00:34:02.147 lat (msec) : 10=0.27%, 50=99.73% 00:34:02.147 cpu : usr=98.60%, sys=0.90%, ctx=66, majf=0, minf=9 00:34:02.147 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:02.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.147 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.147 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.147 filename0: (groupid=0, jobs=1): err= 0: pid=882876: Tue Dec 10 05:10:51 2024 00:34:02.147 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10003msec) 00:34:02.147 slat (nsec): min=5951, max=95644, avg=35748.01, stdev=19781.17 00:34:02.147 clat (usec): min=13699, max=36286, avg=26693.48, stdev=2105.94 00:34:02.147 lat (usec): min=13709, max=36347, avg=26729.23, stdev=2104.72 00:34:02.147 clat percentiles (usec): 00:34:02.147 | 1.00th=[23200], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:34:02.147 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:34:02.147 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30278], 95.00th=[30540], 00:34:02.147 | 99.00th=[31065], 99.50th=[31327], 99.90th=[35914], 99.95th=[35914], 00:34:02.147 | 99.99th=[36439] 00:34:02.147 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2364.84, stdev=130.49, samples=19 00:34:02.147 iops : min= 544, max= 672, avg=591.21, stdev=32.62, samples=19 00:34:02.147 lat (msec) : 20=0.30%, 50=99.70% 00:34:02.147 cpu : usr=97.96%, sys=1.19%, ctx=316, majf=0, minf=9 00:34:02.147 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.147 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.147 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.147 filename1: (groupid=0, jobs=1): err= 0: pid=882877: Tue Dec 10 05:10:51 2024 00:34:02.147 read: IOPS=591, BW=2368KiB/s 
(2424kB/s)(23.1MiB/10002msec) 00:34:02.147 slat (nsec): min=6482, max=90094, avg=40353.18, stdev=16472.34 00:34:02.147 clat (usec): min=8542, max=45829, avg=26714.67, stdev=2390.91 00:34:02.147 lat (usec): min=8567, max=45848, avg=26755.03, stdev=2390.62 00:34:02.147 clat percentiles (usec): 00:34:02.147 | 1.00th=[23200], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:34:02.147 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:34:02.147 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30016], 95.00th=[30540], 00:34:02.147 | 99.00th=[31065], 99.50th=[31327], 99.90th=[45876], 99.95th=[45876], 00:34:02.147 | 99.99th=[45876] 00:34:02.147 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2364.63, stdev=156.00, samples=19 00:34:02.147 iops : min= 544, max= 672, avg=591.16, stdev=39.00, samples=19 00:34:02.147 lat (msec) : 10=0.27%, 50=99.73% 00:34:02.147 cpu : usr=98.43%, sys=1.00%, ctx=96, majf=0, minf=9 00:34:02.147 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:02.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.147 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.147 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.147 filename1: (groupid=0, jobs=1): err= 0: pid=882878: Tue Dec 10 05:10:51 2024 00:34:02.147 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10003msec) 00:34:02.147 slat (nsec): min=5827, max=90205, avg=33347.25, stdev=17014.44 00:34:02.147 clat (usec): min=21295, max=31484, avg=26788.90, stdev=1992.38 00:34:02.147 lat (usec): min=21310, max=31502, avg=26822.25, stdev=1991.43 00:34:02.147 clat percentiles (usec): 00:34:02.147 | 1.00th=[23200], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:34:02.147 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[26870], 00:34:02.147 | 70.00th=[27132], 80.00th=[28967], 90.00th=[30016], 
95.00th=[30802], 00:34:02.147 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:34:02.147 | 99.99th=[31589] 00:34:02.147 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2364.63, stdev=137.39, samples=19 00:34:02.147 iops : min= 544, max= 672, avg=591.16, stdev=34.35, samples=19 00:34:02.147 lat (msec) : 50=100.00% 00:34:02.147 cpu : usr=98.13%, sys=1.24%, ctx=92, majf=0, minf=9 00:34:02.147 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:02.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.147 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.147 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.147 filename1: (groupid=0, jobs=1): err= 0: pid=882879: Tue Dec 10 05:10:51 2024 00:34:02.147 read: IOPS=595, BW=2382KiB/s (2439kB/s)(23.3MiB/10022msec) 00:34:02.147 slat (nsec): min=7270, max=99451, avg=31550.32, stdev=18430.87 00:34:02.147 clat (usec): min=6571, max=35090, avg=26586.70, stdev=2533.52 00:34:02.147 lat (usec): min=6591, max=35144, avg=26618.25, stdev=2534.17 00:34:02.147 clat percentiles (usec): 00:34:02.147 | 1.00th=[16319], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:34:02.147 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:34:02.147 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30016], 95.00th=[30540], 00:34:02.147 | 99.00th=[31065], 99.50th=[31065], 99.90th=[34866], 99.95th=[34866], 00:34:02.147 | 99.99th=[34866] 00:34:02.147 bw ( KiB/s): min= 2048, max= 2688, per=4.19%, avg=2380.55, stdev=157.51, samples=20 00:34:02.147 iops : min= 512, max= 672, avg=595.10, stdev=39.37, samples=20 00:34:02.147 lat (msec) : 10=0.23%, 20=1.04%, 50=98.73% 00:34:02.147 cpu : usr=98.58%, sys=0.89%, ctx=65, majf=0, minf=9 00:34:02.147 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:02.147 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.147 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.147 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.147 filename1: (groupid=0, jobs=1): err= 0: pid=882880: Tue Dec 10 05:10:51 2024 00:34:02.147 read: IOPS=593, BW=2376KiB/s (2433kB/s)(23.2MiB/10022msec) 00:34:02.147 slat (nsec): min=7542, max=65922, avg=19772.22, stdev=11141.33 00:34:02.147 clat (usec): min=10913, max=31341, avg=26781.81, stdev=2218.30 00:34:02.147 lat (usec): min=10955, max=31366, avg=26801.58, stdev=2217.35 00:34:02.147 clat percentiles (usec): 00:34:02.147 | 1.00th=[22938], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:34:02.147 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[27132], 00:34:02.147 | 70.00th=[27132], 80.00th=[28967], 90.00th=[30278], 95.00th=[30802], 00:34:02.147 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:34:02.147 | 99.99th=[31327] 00:34:02.147 bw ( KiB/s): min= 2048, max= 2688, per=4.18%, avg=2374.15, stdev=152.35, samples=20 00:34:02.147 iops : min= 512, max= 672, avg=593.50, stdev=38.07, samples=20 00:34:02.147 lat (msec) : 20=0.54%, 50=99.46% 00:34:02.147 cpu : usr=98.37%, sys=1.07%, ctx=88, majf=0, minf=9 00:34:02.147 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:02.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.147 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.147 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.147 filename1: (groupid=0, jobs=1): err= 0: pid=882881: Tue Dec 10 05:10:51 2024 00:34:02.147 read: IOPS=591, BW=2368KiB/s (2424kB/s)(23.1MiB/10002msec) 00:34:02.147 slat (nsec): min=8005, max=83560, avg=35952.94, 
stdev=17025.88 00:34:02.147 clat (usec): min=8614, max=45072, avg=26766.64, stdev=2389.31 00:34:02.147 lat (usec): min=8657, max=45090, avg=26802.60, stdev=2388.25 00:34:02.147 clat percentiles (usec): 00:34:02.147 | 1.00th=[23200], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:34:02.147 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26870], 00:34:02.147 | 70.00th=[27132], 80.00th=[28967], 90.00th=[30278], 95.00th=[30802], 00:34:02.147 | 99.00th=[31065], 99.50th=[31327], 99.90th=[44827], 99.95th=[44827], 00:34:02.147 | 99.99th=[44827] 00:34:02.147 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2364.84, stdev=155.74, samples=19 00:34:02.147 iops : min= 544, max= 672, avg=591.21, stdev=38.93, samples=19 00:34:02.147 lat (msec) : 10=0.27%, 50=99.73% 00:34:02.147 cpu : usr=98.62%, sys=0.95%, ctx=60, majf=0, minf=9 00:34:02.147 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:02.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.147 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.147 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.147 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.147 filename1: (groupid=0, jobs=1): err= 0: pid=882882: Tue Dec 10 05:10:51 2024 00:34:02.147 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.2MiB/10005msec) 00:34:02.147 slat (nsec): min=6103, max=94736, avg=35221.51, stdev=18805.44 00:34:02.147 clat (usec): min=12923, max=31364, avg=26629.34, stdev=2138.15 00:34:02.147 lat (usec): min=12933, max=31399, avg=26664.56, stdev=2137.56 00:34:02.147 clat percentiles (usec): 00:34:02.147 | 1.00th=[23200], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:34:02.147 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:34:02.147 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30016], 95.00th=[30540], 00:34:02.147 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31065], 
99.95th=[31327], 00:34:02.147 | 99.99th=[31327] 00:34:02.147 bw ( KiB/s): min= 2176, max= 2688, per=4.17%, avg=2371.11, stdev=130.74, samples=19 00:34:02.147 iops : min= 544, max= 672, avg=592.74, stdev=32.71, samples=19 00:34:02.147 lat (msec) : 20=0.54%, 50=99.46% 00:34:02.147 cpu : usr=98.82%, sys=0.74%, ctx=59, majf=0, minf=9 00:34:02.148 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:02.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.148 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.148 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.148 filename1: (groupid=0, jobs=1): err= 0: pid=882883: Tue Dec 10 05:10:51 2024 00:34:02.148 read: IOPS=595, BW=2380KiB/s (2437kB/s)(23.2MiB/10002msec) 00:34:02.148 slat (nsec): min=7439, max=74150, avg=14620.16, stdev=6332.90 00:34:02.148 clat (usec): min=5278, max=31542, avg=26759.80, stdev=2475.32 00:34:02.148 lat (usec): min=5287, max=31565, avg=26774.42, stdev=2475.37 00:34:02.148 clat percentiles (usec): 00:34:02.148 | 1.00th=[18744], 5.00th=[24511], 10.00th=[25035], 20.00th=[25297], 00:34:02.148 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[27132], 00:34:02.148 | 70.00th=[27132], 80.00th=[28967], 90.00th=[30278], 95.00th=[30802], 00:34:02.148 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31589], 99.95th=[31589], 00:34:02.148 | 99.99th=[31589] 00:34:02.148 bw ( KiB/s): min= 2176, max= 2688, per=4.19%, avg=2384.84, stdev=149.09, samples=19 00:34:02.148 iops : min= 544, max= 672, avg=596.21, stdev=37.27, samples=19 00:34:02.148 lat (msec) : 10=0.27%, 20=0.84%, 50=98.89% 00:34:02.148 cpu : usr=98.50%, sys=1.03%, ctx=82, majf=0, minf=9 00:34:02.148 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:02.148 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.148 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.148 filename1: (groupid=0, jobs=1): err= 0: pid=882884: Tue Dec 10 05:10:51 2024 00:34:02.148 read: IOPS=591, BW=2368KiB/s (2424kB/s)(23.1MiB/10002msec) 00:34:02.148 slat (nsec): min=7959, max=76530, avg=31852.58, stdev=15327.04 00:34:02.148 clat (usec): min=18482, max=33100, avg=26785.64, stdev=2048.24 00:34:02.148 lat (usec): min=18494, max=33126, avg=26817.49, stdev=2045.05 00:34:02.148 clat percentiles (usec): 00:34:02.148 | 1.00th=[23200], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:34:02.148 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[26870], 00:34:02.148 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30278], 95.00th=[30802], 00:34:02.148 | 99.00th=[31065], 99.50th=[31327], 99.90th=[33162], 99.95th=[33162], 00:34:02.148 | 99.99th=[33162] 00:34:02.148 bw ( KiB/s): min= 2176, max= 2688, per=4.17%, avg=2371.11, stdev=137.53, samples=19 00:34:02.148 iops : min= 544, max= 672, avg=592.74, stdev=34.40, samples=19 00:34:02.148 lat (msec) : 20=0.27%, 50=99.73% 00:34:02.148 cpu : usr=98.41%, sys=1.16%, ctx=50, majf=0, minf=9 00:34:02.148 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:02.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.148 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.148 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.148 filename2: (groupid=0, jobs=1): err= 0: pid=882885: Tue Dec 10 05:10:51 2024 00:34:02.148 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10003msec) 00:34:02.148 slat (nsec): min=5776, max=95681, avg=35070.71, stdev=17795.63 00:34:02.148 clat (usec): min=13488, max=47022, 
avg=26720.05, stdev=2136.83 00:34:02.148 lat (usec): min=13497, max=47037, avg=26755.12, stdev=2134.58 00:34:02.148 clat percentiles (usec): 00:34:02.148 | 1.00th=[23200], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:34:02.148 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:34:02.148 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30278], 95.00th=[30802], 00:34:02.148 | 99.00th=[31065], 99.50th=[31327], 99.90th=[35914], 99.95th=[35914], 00:34:02.148 | 99.99th=[46924] 00:34:02.148 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2364.84, stdev=130.49, samples=19 00:34:02.148 iops : min= 544, max= 672, avg=591.21, stdev=32.62, samples=19 00:34:02.148 lat (msec) : 20=0.30%, 50=99.70% 00:34:02.148 cpu : usr=98.46%, sys=1.00%, ctx=66, majf=0, minf=9 00:34:02.148 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.148 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.148 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.148 filename2: (groupid=0, jobs=1): err= 0: pid=882886: Tue Dec 10 05:10:51 2024 00:34:02.148 read: IOPS=594, BW=2379KiB/s (2436kB/s)(23.2MiB/10006msec) 00:34:02.148 slat (nsec): min=6944, max=63346, avg=12853.46, stdev=5965.53 00:34:02.148 clat (usec): min=9511, max=31639, avg=26785.95, stdev=2411.43 00:34:02.148 lat (usec): min=9519, max=31665, avg=26798.81, stdev=2411.26 00:34:02.148 clat percentiles (usec): 00:34:02.148 | 1.00th=[18744], 5.00th=[24511], 10.00th=[25035], 20.00th=[25297], 00:34:02.148 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26608], 60.00th=[27132], 00:34:02.148 | 70.00th=[27395], 80.00th=[28967], 90.00th=[30278], 95.00th=[30802], 00:34:02.148 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:34:02.148 | 99.99th=[31589] 00:34:02.148 bw ( 
KiB/s): min= 2176, max= 2688, per=4.18%, avg=2378.11, stdev=137.04, samples=19 00:34:02.148 iops : min= 544, max= 672, avg=594.53, stdev=34.26, samples=19 00:34:02.148 lat (msec) : 10=0.24%, 20=0.84%, 50=98.92% 00:34:02.148 cpu : usr=98.44%, sys=1.03%, ctx=41, majf=0, minf=9 00:34:02.148 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.148 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.148 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.148 filename2: (groupid=0, jobs=1): err= 0: pid=882887: Tue Dec 10 05:10:51 2024 00:34:02.148 read: IOPS=594, BW=2378KiB/s (2435kB/s)(23.2MiB/10011msec) 00:34:02.148 slat (nsec): min=7124, max=76118, avg=22970.42, stdev=12852.96 00:34:02.148 clat (usec): min=11260, max=31573, avg=26738.05, stdev=2323.11 00:34:02.148 lat (usec): min=11274, max=31591, avg=26761.02, stdev=2322.36 00:34:02.148 clat percentiles (usec): 00:34:02.148 | 1.00th=[18744], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:34:02.148 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[27132], 00:34:02.148 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30278], 95.00th=[30802], 00:34:02.148 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:34:02.148 | 99.99th=[31589] 00:34:02.148 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2374.15, stdev=146.58, samples=20 00:34:02.148 iops : min= 544, max= 672, avg=593.50, stdev=36.63, samples=20 00:34:02.148 lat (msec) : 20=1.04%, 50=98.96% 00:34:02.148 cpu : usr=98.51%, sys=0.95%, ctx=58, majf=0, minf=9 00:34:02.148 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.148 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:34:02.148 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.148 filename2: (groupid=0, jobs=1): err= 0: pid=882888: Tue Dec 10 05:10:51 2024 00:34:02.148 read: IOPS=591, BW=2365KiB/s (2422kB/s)(23.1MiB/10011msec) 00:34:02.148 slat (nsec): min=7543, max=74235, avg=30580.55, stdev=14917.44 00:34:02.148 clat (usec): min=16750, max=39732, avg=26792.69, stdev=2058.10 00:34:02.148 lat (usec): min=16760, max=39751, avg=26823.27, stdev=2056.88 00:34:02.148 clat percentiles (usec): 00:34:02.148 | 1.00th=[23200], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:34:02.148 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26608], 60.00th=[26870], 00:34:02.148 | 70.00th=[27132], 80.00th=[28967], 90.00th=[30278], 95.00th=[30802], 00:34:02.148 | 99.00th=[31065], 99.50th=[31327], 99.90th=[34866], 99.95th=[34866], 00:34:02.148 | 99.99th=[39584] 00:34:02.148 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2367.20, stdev=127.63, samples=20 00:34:02.148 iops : min= 544, max= 672, avg=591.80, stdev=31.91, samples=20 00:34:02.148 lat (msec) : 20=0.14%, 50=99.86% 00:34:02.148 cpu : usr=97.85%, sys=1.38%, ctx=98, majf=0, minf=9 00:34:02.148 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:02.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.148 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.148 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.148 filename2: (groupid=0, jobs=1): err= 0: pid=882889: Tue Dec 10 05:10:51 2024 00:34:02.148 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10003msec) 00:34:02.148 slat (nsec): min=3672, max=90094, avg=42498.98, stdev=15782.39 00:34:02.148 clat (usec): min=20495, max=31820, avg=26684.67, stdev=1994.17 00:34:02.148 lat (usec): min=20512, 
max=31834, avg=26727.17, stdev=1994.57 00:34:02.148 clat percentiles (usec): 00:34:02.148 | 1.00th=[23200], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:34:02.148 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:34:02.148 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30016], 95.00th=[30540], 00:34:02.148 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:34:02.148 | 99.99th=[31851] 00:34:02.148 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2364.63, stdev=137.39, samples=19 00:34:02.148 iops : min= 544, max= 672, avg=591.16, stdev=34.35, samples=19 00:34:02.148 lat (msec) : 50=100.00% 00:34:02.148 cpu : usr=98.24%, sys=1.22%, ctx=40, majf=0, minf=9 00:34:02.148 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.148 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.148 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.148 filename2: (groupid=0, jobs=1): err= 0: pid=882890: Tue Dec 10 05:10:51 2024 00:34:02.148 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10003msec) 00:34:02.148 slat (nsec): min=4131, max=90091, avg=43123.31, stdev=15604.71 00:34:02.148 clat (usec): min=20417, max=42016, avg=26666.02, stdev=2040.34 00:34:02.148 lat (usec): min=20449, max=42029, avg=26709.14, stdev=2041.24 00:34:02.148 clat percentiles (usec): 00:34:02.148 | 1.00th=[22938], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:34:02.148 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:34:02.148 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30016], 95.00th=[30540], 00:34:02.148 | 99.00th=[31065], 99.50th=[31327], 99.90th=[32113], 99.95th=[32113], 00:34:02.148 | 99.99th=[42206] 00:34:02.149 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2364.63, stdev=137.39, 
samples=19 00:34:02.149 iops : min= 544, max= 672, avg=591.16, stdev=34.35, samples=19 00:34:02.149 lat (msec) : 50=100.00% 00:34:02.149 cpu : usr=98.42%, sys=1.00%, ctx=63, majf=0, minf=9 00:34:02.149 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:02.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.149 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.149 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.149 filename2: (groupid=0, jobs=1): err= 0: pid=882891: Tue Dec 10 05:10:51 2024 00:34:02.149 read: IOPS=593, BW=2372KiB/s (2429kB/s)(23.2MiB/10009msec) 00:34:02.149 slat (usec): min=3, max=103, avg=38.25, stdev=19.59 00:34:02.149 clat (usec): min=8959, max=33363, avg=26618.63, stdev=2278.33 00:34:02.149 lat (usec): min=8966, max=33375, avg=26656.88, stdev=2275.26 00:34:02.149 clat percentiles (usec): 00:34:02.149 | 1.00th=[22676], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:34:02.149 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:34:02.149 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30278], 95.00th=[30540], 00:34:02.149 | 99.00th=[31065], 99.50th=[31327], 99.90th=[33424], 99.95th=[33424], 00:34:02.149 | 99.99th=[33424] 00:34:02.149 bw ( KiB/s): min= 2176, max= 2688, per=4.16%, avg=2364.32, stdev=136.92, samples=19 00:34:02.149 iops : min= 544, max= 672, avg=591.05, stdev=34.19, samples=19 00:34:02.149 lat (msec) : 10=0.27%, 20=0.27%, 50=99.46% 00:34:02.149 cpu : usr=98.77%, sys=0.81%, ctx=35, majf=0, minf=9 00:34:02.149 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:02.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.149 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.149 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:34:02.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.149 filename2: (groupid=0, jobs=1): err= 0: pid=882892: Tue Dec 10 05:10:51 2024 00:34:02.149 read: IOPS=593, BW=2376KiB/s (2433kB/s)(23.2MiB/10022msec) 00:34:02.149 slat (nsec): min=7778, max=84632, avg=27972.33, stdev=14496.05 00:34:02.149 clat (usec): min=10053, max=37810, avg=26723.59, stdev=2249.43 00:34:02.149 lat (usec): min=10120, max=37856, avg=26751.56, stdev=2246.75 00:34:02.149 clat percentiles (usec): 00:34:02.149 | 1.00th=[22938], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:34:02.149 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:34:02.149 | 70.00th=[27132], 80.00th=[28967], 90.00th=[30278], 95.00th=[30802], 00:34:02.149 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31327], 99.95th=[31589], 00:34:02.149 | 99.99th=[38011] 00:34:02.149 bw ( KiB/s): min= 2048, max= 2688, per=4.18%, avg=2374.15, stdev=152.35, samples=20 00:34:02.149 iops : min= 512, max= 672, avg=593.50, stdev=38.07, samples=20 00:34:02.149 lat (msec) : 20=0.57%, 50=99.43% 00:34:02.149 cpu : usr=98.31%, sys=0.99%, ctx=160, majf=0, minf=9 00:34:02.149 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.149 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.149 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.149 00:34:02.149 Run status group 0 (all jobs): 00:34:02.149 READ: bw=55.5MiB/s (58.2MB/s), 2365KiB/s-2382KiB/s (2422kB/s-2439kB/s), io=556MiB (583MB), run=10001-10022msec 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for 
sub in "$@" 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.149 05:10:51 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- 
# local sub 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.149 bdev_null0 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.149 05:10:51 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.149 [2024-12-10 05:10:51.767460] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.149 bdev_null1 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.149 
05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.149 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:02.150 { 00:34:02.150 "params": { 00:34:02.150 "name": "Nvme$subsystem", 00:34:02.150 "trtype": "$TEST_TRANSPORT", 00:34:02.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:02.150 "adrfam": "ipv4", 00:34:02.150 "trsvcid": "$NVMF_PORT", 00:34:02.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:02.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:02.150 "hdgst": 
${hdgst:-false}, 00:34:02.150 "ddgst": ${ddgst:-false} 00:34:02.150 }, 00:34:02.150 "method": "bdev_nvme_attach_controller" 00:34:02.150 } 00:34:02.150 EOF 00:34:02.150 )") 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # 
for subsystem in "${@:-1}" 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:02.150 { 00:34:02.150 "params": { 00:34:02.150 "name": "Nvme$subsystem", 00:34:02.150 "trtype": "$TEST_TRANSPORT", 00:34:02.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:02.150 "adrfam": "ipv4", 00:34:02.150 "trsvcid": "$NVMF_PORT", 00:34:02.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:02.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:02.150 "hdgst": ${hdgst:-false}, 00:34:02.150 "ddgst": ${ddgst:-false} 00:34:02.150 }, 00:34:02.150 "method": "bdev_nvme_attach_controller" 00:34:02.150 } 00:34:02.150 EOF 00:34:02.150 )") 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:02.150 "params": { 00:34:02.150 "name": "Nvme0", 00:34:02.150 "trtype": "tcp", 00:34:02.150 "traddr": "10.0.0.2", 00:34:02.150 "adrfam": "ipv4", 00:34:02.150 "trsvcid": "4420", 00:34:02.150 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:02.150 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:02.150 "hdgst": false, 00:34:02.150 "ddgst": false 00:34:02.150 }, 00:34:02.150 "method": "bdev_nvme_attach_controller" 00:34:02.150 },{ 00:34:02.150 "params": { 00:34:02.150 "name": "Nvme1", 00:34:02.150 "trtype": "tcp", 00:34:02.150 "traddr": "10.0.0.2", 00:34:02.150 "adrfam": "ipv4", 00:34:02.150 "trsvcid": "4420", 00:34:02.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:02.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:02.150 "hdgst": false, 00:34:02.150 "ddgst": false 00:34:02.150 }, 00:34:02.150 "method": "bdev_nvme_attach_controller" 00:34:02.150 }' 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:02.150 05:10:51 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:02.150 05:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.150 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:02.150 ... 00:34:02.150 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:02.150 ... 00:34:02.150 fio-3.35 00:34:02.150 Starting 4 threads 00:34:07.420 00:34:07.420 filename0: (groupid=0, jobs=1): err= 0: pid=884791: Tue Dec 10 05:10:57 2024 00:34:07.420 read: IOPS=2874, BW=22.5MiB/s (23.5MB/s)(112MiB/5003msec) 00:34:07.420 slat (nsec): min=6030, max=67476, avg=11711.63, stdev=7077.12 00:34:07.420 clat (usec): min=741, max=5614, avg=2744.14, stdev=440.43 00:34:07.420 lat (usec): min=748, max=5626, avg=2755.86, stdev=441.00 00:34:07.420 clat percentiles (usec): 00:34:07.420 | 1.00th=[ 1598], 5.00th=[ 2057], 10.00th=[ 2212], 20.00th=[ 2409], 00:34:07.420 | 30.00th=[ 2507], 40.00th=[ 2638], 50.00th=[ 2769], 60.00th=[ 2868], 00:34:07.420 | 70.00th=[ 2966], 80.00th=[ 3064], 90.00th=[ 3261], 95.00th=[ 3392], 00:34:07.420 | 99.00th=[ 3818], 99.50th=[ 4178], 99.90th=[ 4883], 99.95th=[ 5014], 00:34:07.420 | 99.99th=[ 5604] 00:34:07.420 bw ( KiB/s): min=21216, max=25440, per=27.29%, avg=22998.40, stdev=1322.11, samples=10 00:34:07.420 iops : min= 2652, max= 3180, avg=2874.80, stdev=165.26, samples=10 00:34:07.420 lat (usec) : 750=0.01%, 1000=0.07% 00:34:07.420 lat (msec) : 2=4.01%, 4=95.18%, 10=0.72% 00:34:07.420 cpu : usr=96.98%, sys=2.66%, ctx=7, majf=0, minf=9 00:34:07.420 IO depths : 1=0.4%, 2=13.4%, 4=58.4%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:07.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.420 
complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.420 issued rwts: total=14382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:07.420 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:07.420 filename0: (groupid=0, jobs=1): err= 0: pid=884792: Tue Dec 10 05:10:57 2024 00:34:07.420 read: IOPS=2563, BW=20.0MiB/s (21.0MB/s)(100MiB/5001msec) 00:34:07.420 slat (nsec): min=6078, max=67314, avg=12797.25, stdev=8040.02 00:34:07.420 clat (usec): min=611, max=6127, avg=3081.50, stdev=507.83 00:34:07.420 lat (usec): min=639, max=6137, avg=3094.29, stdev=507.51 00:34:07.421 clat percentiles (usec): 00:34:07.421 | 1.00th=[ 1975], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2769], 00:34:07.421 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3097], 00:34:07.421 | 70.00th=[ 3195], 80.00th=[ 3359], 90.00th=[ 3687], 95.00th=[ 4047], 00:34:07.421 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5407], 99.95th=[ 5932], 00:34:07.421 | 99.99th=[ 6128] 00:34:07.421 bw ( KiB/s): min=19968, max=21136, per=24.37%, avg=20539.56, stdev=382.85, samples=9 00:34:07.421 iops : min= 2496, max= 2642, avg=2567.44, stdev=47.86, samples=9 00:34:07.421 lat (usec) : 750=0.02%, 1000=0.05% 00:34:07.421 lat (msec) : 2=1.01%, 4=93.57%, 10=5.34% 00:34:07.421 cpu : usr=96.64%, sys=3.00%, ctx=10, majf=0, minf=9 00:34:07.421 IO depths : 1=0.3%, 2=6.7%, 4=64.3%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:07.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.421 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.421 issued rwts: total=12821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:07.421 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:07.421 filename1: (groupid=0, jobs=1): err= 0: pid=884793: Tue Dec 10 05:10:57 2024 00:34:07.421 read: IOPS=2533, BW=19.8MiB/s (20.8MB/s)(99.0MiB/5001msec) 00:34:07.421 slat (nsec): min=6037, max=67205, avg=13105.95, stdev=8016.68 00:34:07.421 clat 
(usec): min=559, max=6457, avg=3116.29, stdev=531.23 00:34:07.421 lat (usec): min=568, max=6463, avg=3129.40, stdev=530.85 00:34:07.421 clat percentiles (usec): 00:34:07.421 | 1.00th=[ 1893], 5.00th=[ 2409], 10.00th=[ 2606], 20.00th=[ 2769], 00:34:07.421 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3130], 00:34:07.421 | 70.00th=[ 3228], 80.00th=[ 3425], 90.00th=[ 3752], 95.00th=[ 4113], 00:34:07.421 | 99.00th=[ 4948], 99.50th=[ 5276], 99.90th=[ 5800], 99.95th=[ 5932], 00:34:07.421 | 99.99th=[ 6456] 00:34:07.421 bw ( KiB/s): min=19296, max=20880, per=24.03%, avg=20252.44, stdev=648.93, samples=9 00:34:07.421 iops : min= 2412, max= 2610, avg=2531.56, stdev=81.12, samples=9 00:34:07.421 lat (usec) : 750=0.04%, 1000=0.05% 00:34:07.421 lat (msec) : 2=1.10%, 4=92.38%, 10=6.42% 00:34:07.421 cpu : usr=96.62%, sys=3.00%, ctx=17, majf=0, minf=9 00:34:07.421 IO depths : 1=0.2%, 2=7.3%, 4=64.4%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:07.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.421 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.421 issued rwts: total=12672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:07.421 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:07.421 filename1: (groupid=0, jobs=1): err= 0: pid=884795: Tue Dec 10 05:10:57 2024 00:34:07.421 read: IOPS=2563, BW=20.0MiB/s (21.0MB/s)(100MiB/5001msec) 00:34:07.421 slat (nsec): min=6058, max=58838, avg=15018.37, stdev=9761.96 00:34:07.421 clat (usec): min=626, max=5563, avg=3074.63, stdev=487.72 00:34:07.421 lat (usec): min=637, max=5598, avg=3089.65, stdev=487.78 00:34:07.421 clat percentiles (usec): 00:34:07.421 | 1.00th=[ 1926], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2769], 00:34:07.421 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3097], 00:34:07.421 | 70.00th=[ 3228], 80.00th=[ 3392], 90.00th=[ 3654], 95.00th=[ 3982], 00:34:07.421 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5276], 
99.95th=[ 5407], 00:34:07.421 | 99.99th=[ 5538] 00:34:07.421 bw ( KiB/s): min=19168, max=22000, per=24.42%, avg=20581.33, stdev=913.75, samples=9 00:34:07.421 iops : min= 2396, max= 2750, avg=2572.67, stdev=114.22, samples=9 00:34:07.421 lat (usec) : 750=0.02%, 1000=0.02% 00:34:07.421 lat (msec) : 2=1.33%, 4=93.94%, 10=4.70% 00:34:07.421 cpu : usr=96.60%, sys=3.02%, ctx=20, majf=0, minf=9 00:34:07.421 IO depths : 1=0.4%, 2=7.0%, 4=64.8%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:07.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.421 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.421 issued rwts: total=12821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:07.421 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:07.421 00:34:07.421 Run status group 0 (all jobs): 00:34:07.421 READ: bw=82.3MiB/s (86.3MB/s), 19.8MiB/s-22.5MiB/s (20.8MB/s-23.5MB/s), io=412MiB (432MB), run=5001-5003msec 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:07.421 
05:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.421 00:34:07.421 real 0m24.177s 00:34:07.421 user 4m52.519s 00:34:07.421 sys 0m4.767s 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:07.421 05:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:07.421 ************************************ 00:34:07.421 END TEST fio_dif_rand_params 00:34:07.421 ************************************ 00:34:07.421 05:10:58 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:07.421 05:10:58 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:07.421 05:10:58 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:07.421 05:10:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:07.421 ************************************ 00:34:07.421 START TEST fio_dif_digest 00:34:07.421 ************************************ 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:07.421 bdev_null0 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:07.421 [2024-12-10 05:10:58.275086] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:07.421 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:07.422 { 00:34:07.422 "params": { 00:34:07.422 "name": "Nvme$subsystem", 00:34:07.422 "trtype": "$TEST_TRANSPORT", 00:34:07.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:07.422 "adrfam": "ipv4", 00:34:07.422 "trsvcid": "$NVMF_PORT", 00:34:07.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:07.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:07.422 "hdgst": ${hdgst:-false}, 00:34:07.422 "ddgst": ${ddgst:-false} 00:34:07.422 }, 00:34:07.422 "method": "bdev_nvme_attach_controller" 00:34:07.422 } 00:34:07.422 EOF 00:34:07.422 )") 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:07.422 "params": { 00:34:07.422 "name": "Nvme0", 00:34:07.422 "trtype": "tcp", 00:34:07.422 "traddr": "10.0.0.2", 00:34:07.422 "adrfam": "ipv4", 00:34:07.422 "trsvcid": "4420", 00:34:07.422 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:07.422 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:07.422 "hdgst": true, 00:34:07.422 "ddgst": true 00:34:07.422 }, 00:34:07.422 "method": "bdev_nvme_attach_controller" 00:34:07.422 }' 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:07.422 05:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:07.680 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:07.680 ... 
00:34:07.680 fio-3.35 00:34:07.680 Starting 3 threads 00:34:19.882 00:34:19.882 filename0: (groupid=0, jobs=1): err= 0: pid=886036: Tue Dec 10 05:11:09 2024 00:34:19.882 read: IOPS=285, BW=35.6MiB/s (37.4MB/s)(358MiB/10044msec) 00:34:19.882 slat (nsec): min=6327, max=30559, avg=11716.79, stdev=2420.09 00:34:19.882 clat (usec): min=6536, max=52570, avg=10492.39, stdev=1965.76 00:34:19.882 lat (usec): min=6543, max=52581, avg=10504.11, stdev=1965.70 00:34:19.882 clat percentiles (usec): 00:34:19.882 | 1.00th=[ 8291], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:34:19.882 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:34:19.882 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11731], 95.00th=[12125], 00:34:19.882 | 99.00th=[13042], 99.50th=[13435], 99.90th=[51119], 99.95th=[51643], 00:34:19.882 | 99.99th=[52691] 00:34:19.882 bw ( KiB/s): min=32000, max=39936, per=35.17%, avg=36630.00, stdev=2205.47, samples=20 00:34:19.882 iops : min= 250, max= 312, avg=286.15, stdev=17.24, samples=20 00:34:19.882 lat (msec) : 10=35.23%, 20=64.59%, 50=0.03%, 100=0.14% 00:34:19.882 cpu : usr=95.88%, sys=3.77%, ctx=25, majf=0, minf=24 00:34:19.882 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:19.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.882 issued rwts: total=2864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.882 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:19.882 filename0: (groupid=0, jobs=1): err= 0: pid=886037: Tue Dec 10 05:11:09 2024 00:34:19.882 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(354MiB/10046msec) 00:34:19.882 slat (nsec): min=6405, max=29599, avg=12131.12, stdev=2027.95 00:34:19.882 clat (usec): min=6304, max=54913, avg=10619.26, stdev=2045.16 00:34:19.882 lat (usec): min=6311, max=54940, avg=10631.39, stdev=2045.19 00:34:19.882 clat percentiles (usec): 00:34:19.882 
| 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:34:19.882 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:34:19.882 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11994], 95.00th=[12518], 00:34:19.882 | 99.00th=[13304], 99.50th=[13829], 99.90th=[53740], 99.95th=[54789], 00:34:19.882 | 99.99th=[54789] 00:34:19.882 bw ( KiB/s): min=30208, max=39936, per=34.76%, avg=36198.40, stdev=2634.90, samples=20 00:34:19.882 iops : min= 236, max= 312, avg=282.80, stdev=20.59, samples=20 00:34:19.882 lat (msec) : 10=33.22%, 20=66.61%, 50=0.04%, 100=0.14% 00:34:19.882 cpu : usr=92.66%, sys=5.09%, ctx=707, majf=0, minf=30 00:34:19.882 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:19.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.882 issued rwts: total=2830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.882 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:19.882 filename0: (groupid=0, jobs=1): err= 0: pid=886038: Tue Dec 10 05:11:09 2024 00:34:19.882 read: IOPS=246, BW=30.9MiB/s (32.4MB/s)(310MiB/10046msec) 00:34:19.882 slat (nsec): min=6342, max=34902, avg=12342.13, stdev=2510.96 00:34:19.882 clat (usec): min=7247, max=47775, avg=12120.14, stdev=1607.67 00:34:19.882 lat (usec): min=7258, max=47785, avg=12132.48, stdev=1607.68 00:34:19.882 clat percentiles (usec): 00:34:19.882 | 1.00th=[ 9110], 5.00th=[10290], 10.00th=[10552], 20.00th=[11076], 00:34:19.882 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:34:19.882 | 70.00th=[12649], 80.00th=[13173], 90.00th=[13698], 95.00th=[14222], 00:34:19.882 | 99.00th=[15401], 99.50th=[15795], 99.90th=[16319], 99.95th=[45876], 00:34:19.882 | 99.99th=[47973] 00:34:19.882 bw ( KiB/s): min=26880, max=35072, per=30.45%, avg=31718.40, stdev=1915.55, samples=20 00:34:19.882 iops : min= 210, max= 274, avg=247.80, 
stdev=14.97, samples=20 00:34:19.882 lat (msec) : 10=3.06%, 20=96.85%, 50=0.08% 00:34:19.882 cpu : usr=95.67%, sys=3.98%, ctx=17, majf=0, minf=41 00:34:19.882 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:19.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.882 issued rwts: total=2480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.882 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:19.882 00:34:19.882 Run status group 0 (all jobs): 00:34:19.882 READ: bw=102MiB/s (107MB/s), 30.9MiB/s-35.6MiB/s (32.4MB/s-37.4MB/s), io=1022MiB (1071MB), run=10044-10046msec 00:34:19.882 05:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:19.882 05:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:19.882 05:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:19.882 05:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:19.882 05:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:19.882 05:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:19.882 05:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.882 05:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:19.882 05:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.882 05:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:19.882 05:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.882 05:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:19.882 05:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.882 00:34:19.882 
real 0m11.381s
00:34:19.882 user 0m35.277s
00:34:19.882 sys 0m1.684s
00:34:19.882 05:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:19.882 05:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:34:19.882 ************************************
00:34:19.882 END TEST fio_dif_digest
00:34:19.882 ************************************
00:34:19.882 05:11:09 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:34:19.882 05:11:09 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini
00:34:19.882 05:11:09 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:19.882 05:11:09 nvmf_dif -- nvmf/common.sh@121 -- # sync
00:34:19.882 05:11:09 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:19.882 05:11:09 nvmf_dif -- nvmf/common.sh@124 -- # set +e
00:34:19.882 05:11:09 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:19.882 05:11:09 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:19.882 rmmod nvme_tcp
00:34:19.882 rmmod nvme_fabrics
00:34:19.882 rmmod nvme_keyring
00:34:19.882 05:11:09 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:19.882 05:11:09 nvmf_dif -- nvmf/common.sh@128 -- # set -e
00:34:19.882 05:11:09 nvmf_dif -- nvmf/common.sh@129 -- # return 0
00:34:19.882 05:11:09 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 877645 ']'
00:34:19.882 05:11:09 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 877645
00:34:19.882 05:11:09 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 877645 ']'
00:34:19.882 05:11:09 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 877645
00:34:19.882 05:11:09 nvmf_dif -- common/autotest_common.sh@959 -- # uname
00:34:19.882 05:11:09 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:19.882 05:11:09 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 877645
00:34:19.882 05:11:09 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:19.882 05:11:09 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:19.882 05:11:09 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 877645'
00:34:19.883 killing process with pid 877645
00:34:19.883 05:11:09 nvmf_dif -- common/autotest_common.sh@973 -- # kill 877645
00:34:19.883 05:11:09 nvmf_dif -- common/autotest_common.sh@978 -- # wait 877645
00:34:19.883 05:11:09 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']'
00:34:19.883 05:11:09 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:34:21.789 Waiting for block devices as requested
00:34:21.789 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:34:21.789 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:34:21.789 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:34:21.789 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:34:22.048 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:34:22.048 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:34:22.048 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:34:22.307 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:34:22.307 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:34:22.307 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:34:22.307 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:34:22.565 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:34:22.565 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:34:22.565 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:34:22.824 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:34:22.824 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:34:22.824 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:34:23.083 05:11:13 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:23.083 05:11:13 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:23.083 05:11:13 nvmf_dif -- nvmf/common.sh@297 -- # iptr
00:34:23.083 05:11:13 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save
00:34:23.083 05:11:13 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:23.083 05:11:13 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore
00:34:23.083 05:11:13 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:23.083 05:11:13 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:23.083 05:11:13 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:23.083 05:11:13 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:34:23.083 05:11:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:24.988 05:11:16 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:24.988
00:34:24.988 real 1m13.795s
00:34:24.988 user 7m9.198s
00:34:24.988 sys 0m20.246s
00:34:24.988 05:11:16 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:24.988 05:11:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:34:24.988 ************************************
00:34:24.988 END TEST nvmf_dif
00:34:24.988 ************************************
00:34:24.988 05:11:16 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh
00:34:24.988 05:11:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:34:24.988 05:11:16 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:24.988 05:11:16 -- common/autotest_common.sh@10 -- # set +x
00:34:24.988 ************************************
00:34:24.988 START TEST nvmf_abort_qd_sizes
00:34:24.988 ************************************
00:34:24.988 05:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh
00:34:25.248 * Looking for test storage...
00:34:25.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-:
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-:
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<'
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:25.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.248 --rc genhtml_branch_coverage=1 00:34:25.248 --rc genhtml_function_coverage=1 00:34:25.248 --rc genhtml_legend=1 00:34:25.248 --rc geninfo_all_blocks=1 00:34:25.248 --rc geninfo_unexecuted_blocks=1 00:34:25.248 00:34:25.248 ' 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:25.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.248 --rc genhtml_branch_coverage=1 00:34:25.248 --rc genhtml_function_coverage=1 00:34:25.248 --rc genhtml_legend=1 00:34:25.248 --rc 
geninfo_all_blocks=1 00:34:25.248 --rc geninfo_unexecuted_blocks=1 00:34:25.248 00:34:25.248 ' 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:25.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.248 --rc genhtml_branch_coverage=1 00:34:25.248 --rc genhtml_function_coverage=1 00:34:25.248 --rc genhtml_legend=1 00:34:25.248 --rc geninfo_all_blocks=1 00:34:25.248 --rc geninfo_unexecuted_blocks=1 00:34:25.248 00:34:25.248 ' 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:25.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.248 --rc genhtml_branch_coverage=1 00:34:25.248 --rc genhtml_function_coverage=1 00:34:25.248 --rc genhtml_legend=1 00:34:25.248 --rc geninfo_all_blocks=1 00:34:25.248 --rc geninfo_unexecuted_blocks=1 00:34:25.248 00:34:25.248 ' 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:25.248 05:11:16 nvmf_abort_qd_sizes 
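The `lt 1.15 2` / `cmp_versions` trace above splits both version strings on `.`, `-`, and `:` into arrays and compares them component by component. A condensed re-sketch of that "less than" check (numeric components only, as in the traced scripts/common.sh logic; missing components are treated as 0):

```shell
#!/usr/bin/env bash
# Minimal sketch of the cmp_versions "<" path traced above.
version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # pad shorter version with 0s
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}
```

This is why `lt 1.15 2` succeeds in the log: the first components already decide the comparison.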
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.248 05:11:16 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:25.249 05:11:16 
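The repeated `paths/export.sh` sourcing above leaves the same toolchain directories (`/opt/go`, `/opt/golangci`, `/opt/protoc`) in `PATH` several times over. A small standalone sketch, not part of the test scripts, that drops duplicate entries while keeping the first occurrence:

```shell
#!/usr/bin/env bash
# De-duplicate a PATH-style string, preserving first-seen order.
dedupe_path() {
    local entry out=
    while IFS= read -r -d ':' entry; do
        case ":$out:" in
            *":$entry:"*) ;;                  # already seen, skip
            *) out=${out:+$out:}$entry ;;     # append with separator
        esac
    done <<< "$1:"    # trailing ':' so read -d ':' sees the last entry
    printf '%s\n' "$out"
}
```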
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:25.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:25.249 05:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:31.818 05:11:21 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:31.818 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:31.818 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:31.818 Found net devices under 0000:af:00.0: cvl_0_0 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:31.818 Found net devices under 0000:af:00.1: cvl_0_1 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:31.818 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:31.819 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:31.819 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:31.819 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:31.819 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:31.819 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:31.819 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:31.819 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:31.819 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:31.819 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
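The "Found net devices under 0000:af:00.0: cvl_0_0" lines above come from globbing each PCI device's `net/` directory in sysfs: the kernel exposes a NIC's interface names as directory entries under `/sys/bus/pci/devices/<bdf>/net/`. A sketch of that discovery step; `sysfs_root` is a parameter here only so the sketch can run against a fake tree, the real scripts use `/sys` directly:

```shell
#!/usr/bin/env bash
# Sketch of the per-PCI-device net interface discovery traced above.
find_pci_net_devs() {
    local sysfs_root=$1 bdf=$2 d
    local -a devs=()
    for d in "$sysfs_root/bus/pci/devices/$bdf/net/"*/; do
        [[ -d $d ]] || continue       # glob may match nothing
        d=${d%/}                      # strip trailing slash
        devs+=("${d##*/}")            # keep only the interface name
    done
    (( ${#devs[@]} )) && \
        printf 'Found net devices under %s: %s\n' "$bdf" "${devs[*]}"
}
```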
"$NVMF_TARGET_NAMESPACE") 00:34:31.819 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:31.819 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:31.819 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:31.819 05:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:31.819 05:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:31.819 05:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:31.819 05:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:31.819 05:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:31.819 05:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:31.819 05:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:31.819 05:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:31.819 05:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:31.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:31.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:34:31.819 00:34:31.819 --- 10.0.0.2 ping statistics --- 00:34:31.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:31.819 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:34:31.819 05:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:31.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:31.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:34:31.819 00:34:31.819 --- 10.0.0.1 ping statistics --- 00:34:31.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:31.819 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:34:31.819 05:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:31.819 05:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:31.819 05:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:31.819 05:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:34.355 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:34.355 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:34.355 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:34.355 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:34.355 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:34.355 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:34.355 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:34.355 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:34.355 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:34.355 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:34.355 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:34.355 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:34.355 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:34.355 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:34.355 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:34.355 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:34.921 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:34.921 05:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:34.921 05:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:34.921 05:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:34.921 05:11:25 
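The `nvmf_tcp_init` sequence traced above (move the target NIC into a namespace, address both sides, bring links up, open TCP port 4420, then verify with `ping` in each direction) condenses to a handful of `ip`/`iptables` calls. A dry-run sketch: with the default `RUN=echo` it only prints the commands; dropping `RUN` (and running as root) would actually configure the namespace. Interface, namespace, and address names are the ones from the log:

```shell
#!/usr/bin/env bash
# Condensed sketch of the netns setup traced above. RUN=echo = dry run.
RUN=${RUN:-echo}
setup_nvmf_netns() {
    local ns=$1 tgt_if=$2 ini_if=$3 tgt_ip=$4 ini_ip=$5
    $RUN ip netns add "$ns"
    $RUN ip link set "$tgt_if" netns "$ns"        # target NIC moves into the netns
    $RUN ip addr add "$ini_ip/24" dev "$ini_if"   # initiator side stays in the root ns
    $RUN ip netns exec "$ns" ip addr add "$tgt_ip/24" dev "$tgt_if"
    $RUN ip link set "$ini_if" up
    $RUN ip netns exec "$ns" ip link set "$tgt_if" up
    $RUN ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP port on the initiator side, tagged for later cleanup
    $RUN iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
}
```

The `SPDK_NVMF` comment tag is what the teardown path at the top of this section greps for when restoring iptables rules.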
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:34.921 05:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:34.921 05:11:25 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:34.921 05:11:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:34.921 05:11:26 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:34.921 05:11:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:34.921 05:11:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:34.921 05:11:26 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=893891 00:34:34.921 05:11:26 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 893891 00:34:34.921 05:11:26 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:34.921 05:11:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 893891 ']' 00:34:34.921 05:11:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.921 05:11:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:34.921 05:11:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:34.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:34.921 05:11:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:34.921 05:11:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:35.180 [2024-12-10 05:11:26.090257] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:34:35.180 [2024-12-10 05:11:26.090302] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:35.180 [2024-12-10 05:11:26.168325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:35.180 [2024-12-10 05:11:26.210171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:35.180 [2024-12-10 05:11:26.210208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.180 [2024-12-10 05:11:26.210215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.180 [2024-12-10 05:11:26.210222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.181 [2024-12-10 05:11:26.210227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:35.181 [2024-12-10 05:11:26.211693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.181 [2024-12-10 05:11:26.211801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:35.181 [2024-12-10 05:11:26.211910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.181 [2024-12-10 05:11:26.211911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:35.181 05:11:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:35.181 05:11:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:35.181 05:11:26 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:35.181 05:11:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:35.181 05:11:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:35.440 05:11:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:35.440 ************************************ 00:34:35.440 START TEST spdk_target_abort 00:34:35.440 ************************************ 00:34:35.440 05:11:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:35.440 05:11:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:35.440 05:11:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:35.440 05:11:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.440 05:11:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.729 spdk_targetn1 00:34:38.729 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.729 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- 
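The `nvme_in_userspace` trace above finds NVMe controllers by PCI class code `0x010802` and then checks for a `/sys/bus/pci/drivers/nvme/<bdf>` entry, which is how it settles on `0000:5e:00.0`. A sketch of that filter; `sysfs_root` is parameterized only so the sketch is testable against a fake tree:

```shell
#!/usr/bin/env bash
# Sketch of the NVMe BDF enumeration traced above: class 0x010802
# devices currently bound to the kernel nvme driver.
nvme_bdfs() {
    local sysfs_root=$1 dev class
    for dev in "$sysfs_root/bus/pci/devices"/*/; do
        dev=${dev%/}
        read -r class < "$dev/class" 2>/dev/null || continue
        [[ $class == 0x010802 ]] || continue   # NVM Express class code
        # mirror the scripts' driver check before reporting the BDF
        [[ -e $sysfs_root/bus/pci/drivers/nvme/${dev##*/} ]] || continue
        printf '%s\n' "${dev##*/}"
    done
}
```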
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:38.729 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.729 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.729 [2024-12-10 05:11:29.226379] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:38.729 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.729 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:38.729 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.730 [2024-12-10 05:11:29.278680] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:38.730 05:11:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:42.155 Initializing NVMe Controllers 00:34:42.155 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:42.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:42.155 Initialization complete. Launching workers. 
00:34:42.155 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15235, failed: 0 00:34:42.155 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1381, failed to submit 13854 00:34:42.155 success 740, unsuccessful 641, failed 0 00:34:42.155 05:11:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:42.155 05:11:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:45.458 Initializing NVMe Controllers 00:34:45.458 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:45.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:45.458 Initialization complete. Launching workers. 00:34:45.458 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8819, failed: 0 00:34:45.458 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1250, failed to submit 7569 00:34:45.458 success 325, unsuccessful 925, failed 0 00:34:45.458 05:11:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:45.459 05:11:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:48.748 Initializing NVMe Controllers 00:34:48.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:48.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:48.749 Initialization complete. Launching workers. 
00:34:48.749 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38982, failed: 0 00:34:48.749 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2696, failed to submit 36286 00:34:48.749 success 573, unsuccessful 2123, failed 0 00:34:48.749 05:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:48.749 05:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.749 05:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:48.749 05:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.749 05:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:48.749 05:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.749 05:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:49.684 05:11:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.685 05:11:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 893891 00:34:49.685 05:11:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 893891 ']' 00:34:49.685 05:11:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 893891 00:34:49.685 05:11:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:49.685 05:11:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:49.685 05:11:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 893891 00:34:49.685 05:11:40 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:49.685 05:11:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:49.685 05:11:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 893891' 00:34:49.685 killing process with pid 893891 00:34:49.685 05:11:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 893891 00:34:49.685 05:11:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 893891 00:34:49.685 00:34:49.685 real 0m14.382s 00:34:49.685 user 0m54.781s 00:34:49.685 sys 0m2.650s 00:34:49.685 05:11:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:49.685 05:11:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:49.685 ************************************ 00:34:49.685 END TEST spdk_target_abort 00:34:49.685 ************************************ 00:34:49.685 05:11:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:49.685 05:11:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:49.685 05:11:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:49.685 05:11:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:49.944 ************************************ 00:34:49.944 START TEST kernel_target_abort 00:34:49.944 ************************************ 00:34:49.944 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:49.944 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:49.944 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:49.944 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:34:49.944 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.944 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.944 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.944 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.944 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.944 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.944 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.944 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.944 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:49.944 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:49.944 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:49.944 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:49.945 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:49.945 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:49.945 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:34:49.945 05:11:40 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:49.945 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:49.945 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:49.945 05:11:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:52.481 Waiting for block devices as requested 00:34:52.481 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:52.741 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:52.741 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:53.000 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:53.000 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:53.000 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:53.000 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:53.258 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:53.258 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:53.258 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:53.517 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:53.517 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:53.517 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:53.517 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:53.775 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:53.775 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:53.775 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:54.035 05:11:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:54.035 05:11:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:54.035 05:11:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:54.035 05:11:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:34:54.035 05:11:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:54.035 05:11:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:54.035 05:11:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:54.035 05:11:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:54.035 05:11:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:54.035 No valid GPT data, bailing 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:54.035 00:34:54.035 Discovery Log Number of Records 2, Generation counter 2 00:34:54.035 =====Discovery Log Entry 0====== 00:34:54.035 trtype: tcp 00:34:54.035 adrfam: ipv4 00:34:54.035 subtype: current discovery subsystem 00:34:54.035 treq: not specified, sq flow control disable supported 00:34:54.035 portid: 1 00:34:54.035 trsvcid: 4420 00:34:54.035 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:54.035 traddr: 10.0.0.1 00:34:54.035 eflags: none 00:34:54.035 sectype: none 00:34:54.035 =====Discovery Log Entry 1====== 00:34:54.035 trtype: tcp 00:34:54.035 adrfam: ipv4 00:34:54.035 subtype: nvme subsystem 00:34:54.035 treq: not specified, sq flow control disable supported 00:34:54.035 portid: 1 00:34:54.035 trsvcid: 4420 00:34:54.035 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:54.035 traddr: 10.0.0.1 00:34:54.035 eflags: none 00:34:54.035 sectype: none 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:54.035 05:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:57.322 Initializing NVMe Controllers 00:34:57.322 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:57.322 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:57.322 Initialization complete. Launching workers. 
00:34:57.322 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80990, failed: 0 00:34:57.322 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 80990, failed to submit 0 00:34:57.322 success 0, unsuccessful 80990, failed 0 00:34:57.322 05:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:57.322 05:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:00.612 Initializing NVMe Controllers 00:35:00.612 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:00.612 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:00.612 Initialization complete. Launching workers. 00:35:00.612 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 144421, failed: 0 00:35:00.612 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27954, failed to submit 116467 00:35:00.612 success 0, unsuccessful 27954, failed 0 00:35:00.612 05:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:00.612 05:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:03.902 Initializing NVMe Controllers 00:35:03.902 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:03.902 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:03.902 Initialization complete. Launching workers. 
00:35:03.902 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 131722, failed: 0 00:35:03.902 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32942, failed to submit 98780 00:35:03.902 success 0, unsuccessful 32942, failed 0 00:35:03.902 05:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:03.902 05:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:03.902 05:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:03.902 05:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:03.902 05:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:03.902 05:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:03.902 05:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:03.902 05:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:03.902 05:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:03.902 05:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:06.575 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:06.575 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:06.575 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:06.575 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:06.575 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:06.575 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:06.575 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:06.575 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:06.575 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:06.575 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:06.575 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:06.575 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:06.575 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:06.575 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:06.575 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:06.575 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:07.143 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:07.401 00:35:07.401 real 0m17.556s 00:35:07.401 user 0m8.650s 00:35:07.401 sys 0m5.247s 00:35:07.401 05:11:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:07.401 05:11:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:07.401 ************************************ 00:35:07.401 END TEST kernel_target_abort 00:35:07.401 ************************************ 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:07.401 rmmod nvme_tcp 00:35:07.401 rmmod nvme_fabrics 00:35:07.401 rmmod nvme_keyring 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 893891 ']' 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 893891 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 893891 ']' 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 893891 00:35:07.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (893891) - No such process 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 893891 is not found' 00:35:07.401 Process with pid 893891 is not found 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:07.401 05:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:10.724 Waiting for block devices as requested 00:35:10.724 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:10.724 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:10.724 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:10.724 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:10.724 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:10.724 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:10.724 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:10.724 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:10.724 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:10.983 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:10.983 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:10.983 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:11.242 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:11.242 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:11.242 0000:80:04.2 
(8086 2021): vfio-pci -> ioatdma 00:35:11.501 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:11.501 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:11.501 05:12:02 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:11.501 05:12:02 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:11.501 05:12:02 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:11.501 05:12:02 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:11.501 05:12:02 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:11.501 05:12:02 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:11.501 05:12:02 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:11.501 05:12:02 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:11.501 05:12:02 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.501 05:12:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:11.501 05:12:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:14.037 05:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:14.037 00:35:14.037 real 0m48.510s 00:35:14.037 user 1m7.715s 00:35:14.037 sys 0m16.620s 00:35:14.037 05:12:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:14.037 05:12:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:14.037 ************************************ 00:35:14.037 END TEST nvmf_abort_qd_sizes 00:35:14.037 ************************************ 00:35:14.037 05:12:04 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:14.037 05:12:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:14.037 05:12:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 
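The kernel_target_abort traces above build the kernel nvmet target by hand through configfs (the mkdir/echo/ln sequence in nvmf/common.sh). A minimal sketch of those steps, assuming root, the nvmet and nvmet_tcp modules, and `/dev/nvme0n1` as backing device; the log does not show where each `echo` is redirected, so the attribute filenames below are the standard nvmet configfs names and should be treated as an assumption:

```shell
#!/bin/sh
# Compose the configfs paths the trace uses; only the path math runs here,
# the privileged writes are shown commented.
nvmet=/sys/kernel/config/nvmet
nqn=nqn.2016-06.io.spdk:testnqn
subsys=$nvmet/subsystems/$nqn
ns=$subsys/namespaces/1
port=$nvmet/ports/1
echo "$ns"

# Privileged setup (root, with nvmet + nvmet_tcp loaded):
#   mkdir -p "$ns" "$port"
#   echo 1            > "$subsys/attr_allow_any_host"
#   echo /dev/nvme0n1 > "$ns/device_path"
#   echo 1            > "$ns/enable"
#   echo 10.0.0.1     > "$port/addr_traddr"
#   echo tcp          > "$port/addr_trtype"
#   echo 4420         > "$port/addr_trsvcid"
#   echo ipv4         > "$port/addr_adrfam"
#   ln -s "$subsys" "$port/subsystems/"
# (The trace also writes a model string, SPDK-$nqn, to a subsystem attribute.)
# Teardown mirrors clean_kernel_target: rm the port link, rmdir the
# namespace, port, and subsystem dirs, then modprobe -r nvmet_tcp nvmet.
```

Once the port link exists, the target answers discovery, which is why the trace's `nvme discover -a 10.0.0.1 -t tcp -s 4420` shows two log entries: the discovery subsystem and nqn.2016-06.io.spdk:testnqn.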
00:35:14.037 05:12:04 -- common/autotest_common.sh@10 -- # set +x 00:35:14.037 ************************************ 00:35:14.037 START TEST keyring_file 00:35:14.037 ************************************ 00:35:14.037 05:12:04 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:14.037 * Looking for test storage... 00:35:14.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:14.037 05:12:04 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:14.037 05:12:04 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:35:14.037 05:12:04 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:14.037 05:12:04 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:14.037 05:12:04 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:14.037 05:12:04 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:14.037 05:12:04 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:14.037 05:12:04 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.037 --rc genhtml_branch_coverage=1 00:35:14.037 --rc genhtml_function_coverage=1 00:35:14.037 --rc genhtml_legend=1 00:35:14.037 --rc geninfo_all_blocks=1 00:35:14.037 --rc geninfo_unexecuted_blocks=1 00:35:14.037 00:35:14.037 ' 00:35:14.037 05:12:04 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.037 --rc genhtml_branch_coverage=1 00:35:14.037 --rc genhtml_function_coverage=1 00:35:14.037 --rc genhtml_legend=1 00:35:14.037 --rc geninfo_all_blocks=1 00:35:14.037 --rc 
geninfo_unexecuted_blocks=1 00:35:14.037 00:35:14.037 ' 00:35:14.037 05:12:04 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.037 --rc genhtml_branch_coverage=1 00:35:14.037 --rc genhtml_function_coverage=1 00:35:14.037 --rc genhtml_legend=1 00:35:14.037 --rc geninfo_all_blocks=1 00:35:14.037 --rc geninfo_unexecuted_blocks=1 00:35:14.037 00:35:14.037 ' 00:35:14.037 05:12:04 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.037 --rc genhtml_branch_coverage=1 00:35:14.037 --rc genhtml_function_coverage=1 00:35:14.037 --rc genhtml_legend=1 00:35:14.037 --rc geninfo_all_blocks=1 00:35:14.037 --rc geninfo_unexecuted_blocks=1 00:35:14.037 00:35:14.037 ' 00:35:14.037 05:12:04 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:14.037 05:12:04 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:14.037 05:12:04 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:14.037 05:12:04 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:14.038 05:12:04 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:14.038 05:12:04 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:14.038 05:12:04 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:14.038 05:12:04 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:14.038 05:12:04 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:14.038 05:12:04 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.038 05:12:04 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.038 05:12:04 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.038 05:12:04 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:14.038 05:12:04 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:14.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:14.038 05:12:04 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:14.038 05:12:04 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:14.038 05:12:04 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:14.038 05:12:04 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:14.038 05:12:04 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:14.038 05:12:04 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:14.038 05:12:04 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:14.038 05:12:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:14.038 05:12:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:14.038 05:12:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:14.038 05:12:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:14.038 05:12:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:14.038 05:12:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qQZqwvVRzc 00:35:14.038 05:12:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:14.038 05:12:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qQZqwvVRzc 00:35:14.038 05:12:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qQZqwvVRzc 00:35:14.038 05:12:04 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.qQZqwvVRzc 00:35:14.038 05:12:04 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:14.038 05:12:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:14.038 05:12:04 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:14.038 05:12:04 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:14.038 05:12:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:14.038 05:12:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:14.038 05:12:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Sxl5UuRTzL 00:35:14.038 05:12:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:14.038 05:12:04 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:14.038 05:12:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Sxl5UuRTzL 00:35:14.038 05:12:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Sxl5UuRTzL 00:35:14.038 05:12:05 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Sxl5UuRTzL 
00:35:14.038 05:12:05 keyring_file -- keyring/file.sh@30 -- # tgtpid=902612 00:35:14.038 05:12:05 keyring_file -- keyring/file.sh@32 -- # waitforlisten 902612 00:35:14.038 05:12:05 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:14.038 05:12:05 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 902612 ']' 00:35:14.038 05:12:05 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:14.038 05:12:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:14.038 05:12:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:14.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:14.038 05:12:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:14.038 05:12:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:14.038 [2024-12-10 05:12:05.075842] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:35:14.038 [2024-12-10 05:12:05.075892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902612 ] 00:35:14.038 [2024-12-10 05:12:05.150871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.297 [2024-12-10 05:12:05.192598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.297 05:12:05 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:14.297 05:12:05 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:14.297 05:12:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:14.297 05:12:05 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.297 05:12:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:14.297 [2024-12-10 05:12:05.412113] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:14.556 null0 00:35:14.556 [2024-12-10 05:12:05.444164] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:14.556 [2024-12-10 05:12:05.444473] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.556 05:12:05 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:14.556 [2024-12-10 05:12:05.476244] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:14.556 request: 00:35:14.556 { 00:35:14.556 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:14.556 "secure_channel": false, 00:35:14.556 "listen_address": { 00:35:14.556 "trtype": "tcp", 00:35:14.556 "traddr": "127.0.0.1", 00:35:14.556 "trsvcid": "4420" 00:35:14.556 }, 00:35:14.556 "method": "nvmf_subsystem_add_listener", 00:35:14.556 "req_id": 1 00:35:14.556 } 00:35:14.556 Got JSON-RPC error response 00:35:14.556 response: 00:35:14.556 { 00:35:14.556 "code": -32602, 00:35:14.556 "message": "Invalid parameters" 00:35:14.556 } 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:14.556 05:12:05 keyring_file -- keyring/file.sh@47 -- # bperfpid=902619 00:35:14.556 05:12:05 keyring_file -- keyring/file.sh@49 -- # waitforlisten 902619 /var/tmp/bperf.sock 00:35:14.556 05:12:05 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:14.556 05:12:05 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 902619 ']' 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:14.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:14.556 05:12:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:14.557 05:12:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:14.557 [2024-12-10 05:12:05.531755] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:35:14.557 [2024-12-10 05:12:05.531796] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902619 ] 00:35:14.557 [2024-12-10 05:12:05.606910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.557 [2024-12-10 05:12:05.650198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:14.816 05:12:05 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:14.816 05:12:05 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:14.816 05:12:05 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qQZqwvVRzc 00:35:14.816 05:12:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qQZqwvVRzc 00:35:15.075 05:12:05 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Sxl5UuRTzL 00:35:15.075 05:12:05 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Sxl5UuRTzL 00:35:15.075 05:12:06 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:15.075 05:12:06 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:15.075 05:12:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:15.075 05:12:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:15.075 05:12:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.334 05:12:06 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.qQZqwvVRzc == \/\t\m\p\/\t\m\p\.\q\Q\Z\q\w\v\V\R\z\c ]] 00:35:15.334 05:12:06 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:15.334 05:12:06 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:15.334 05:12:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:15.334 05:12:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:15.334 05:12:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.593 05:12:06 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Sxl5UuRTzL == \/\t\m\p\/\t\m\p\.\S\x\l\5\U\u\R\T\z\L ]] 00:35:15.593 05:12:06 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:15.593 05:12:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:15.593 05:12:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:15.593 05:12:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:15.593 05:12:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:15.593 05:12:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:15.852 05:12:06 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:15.852 05:12:06 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:15.852 05:12:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:15.852 05:12:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:15.852 05:12:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:15.852 05:12:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:15.852 05:12:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.112 05:12:06 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:16.112 05:12:06 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:16.112 05:12:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:16.112 [2024-12-10 05:12:07.166296] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:16.112 nvme0n1 00:35:16.371 05:12:07 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:16.371 05:12:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:16.371 05:12:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:16.371 05:12:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.371 05:12:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:16.371 05:12:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:16.371 05:12:07 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:16.371 05:12:07 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:16.371 05:12:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:16.371 05:12:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:16.371 05:12:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.371 05:12:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:16.371 05:12:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.630 05:12:07 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:16.630 05:12:07 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:16.630 Running I/O for 1 seconds... 00:35:18.006 19335.00 IOPS, 75.53 MiB/s 00:35:18.006 Latency(us) 00:35:18.006 [2024-12-10T04:12:09.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.006 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:18.006 nvme0n1 : 1.00 19382.51 75.71 0.00 0.00 6592.07 2839.89 16477.62 00:35:18.006 [2024-12-10T04:12:09.143Z] =================================================================================================================== 00:35:18.006 [2024-12-10T04:12:09.143Z] Total : 19382.51 75.71 0.00 0.00 6592.07 2839.89 16477.62 00:35:18.006 { 00:35:18.006 "results": [ 00:35:18.006 { 00:35:18.006 "job": "nvme0n1", 00:35:18.006 "core_mask": "0x2", 00:35:18.006 "workload": "randrw", 00:35:18.006 "percentage": 50, 00:35:18.006 "status": "finished", 00:35:18.006 "queue_depth": 128, 00:35:18.006 "io_size": 4096, 00:35:18.006 "runtime": 1.004256, 00:35:18.006 "iops": 19382.50804575726, 00:35:18.006 "mibps": 75.71292205373929, 
00:35:18.006 "io_failed": 0, 00:35:18.006 "io_timeout": 0, 00:35:18.006 "avg_latency_us": 6592.070424816215, 00:35:18.006 "min_latency_us": 2839.8933333333334, 00:35:18.006 "max_latency_us": 16477.62285714286 00:35:18.006 } 00:35:18.006 ], 00:35:18.006 "core_count": 1 00:35:18.006 } 00:35:18.006 05:12:08 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:18.006 05:12:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:18.006 05:12:08 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:18.006 05:12:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:18.006 05:12:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:18.006 05:12:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:18.006 05:12:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:18.006 05:12:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:18.265 05:12:09 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:18.265 05:12:09 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:18.265 05:12:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:18.265 05:12:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:18.265 05:12:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:18.265 05:12:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:18.265 05:12:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:18.265 05:12:09 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:18.265 05:12:09 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:18.265 05:12:09 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:18.265 05:12:09 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:18.265 05:12:09 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:18.265 05:12:09 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:18.265 05:12:09 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:18.265 05:12:09 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:18.265 05:12:09 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:18.265 05:12:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:18.524 [2024-12-10 05:12:09.536733] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:18.524 [2024-12-10 05:12:09.537628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f26410 (107): Transport endpoint is not connected 00:35:18.524 [2024-12-10 05:12:09.538623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f26410 (9): Bad file descriptor 00:35:18.524 [2024-12-10 05:12:09.539624] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:18.524 [2024-12-10 05:12:09.539634] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:18.524 [2024-12-10 05:12:09.539642] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:18.524 [2024-12-10 05:12:09.539650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:18.524 request: 00:35:18.524 { 00:35:18.524 "name": "nvme0", 00:35:18.524 "trtype": "tcp", 00:35:18.524 "traddr": "127.0.0.1", 00:35:18.524 "adrfam": "ipv4", 00:35:18.524 "trsvcid": "4420", 00:35:18.524 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:18.524 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:18.524 "prchk_reftag": false, 00:35:18.524 "prchk_guard": false, 00:35:18.524 "hdgst": false, 00:35:18.524 "ddgst": false, 00:35:18.524 "psk": "key1", 00:35:18.524 "allow_unrecognized_csi": false, 00:35:18.524 "method": "bdev_nvme_attach_controller", 00:35:18.524 "req_id": 1 00:35:18.524 } 00:35:18.524 Got JSON-RPC error response 00:35:18.524 response: 00:35:18.524 { 00:35:18.524 "code": -5, 00:35:18.524 "message": "Input/output error" 00:35:18.524 } 00:35:18.524 05:12:09 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:18.524 05:12:09 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:18.524 05:12:09 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:18.524 05:12:09 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:18.524 05:12:09 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:18.524 05:12:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:18.524 05:12:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:18.524 05:12:09 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:18.524 05:12:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:18.524 05:12:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:18.782 05:12:09 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:18.782 05:12:09 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:18.782 05:12:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:18.782 05:12:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:18.782 05:12:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:18.782 05:12:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:18.782 05:12:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:19.040 05:12:09 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:19.040 05:12:09 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:19.040 05:12:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:19.298 05:12:10 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:19.298 05:12:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:19.298 05:12:10 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:19.298 05:12:10 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:19.298 05:12:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.556 05:12:10 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:19.556 05:12:10 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.qQZqwvVRzc 00:35:19.556 05:12:10 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.qQZqwvVRzc 00:35:19.556 05:12:10 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:19.556 05:12:10 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.qQZqwvVRzc 00:35:19.556 05:12:10 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:19.556 05:12:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.556 05:12:10 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:19.556 05:12:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.556 05:12:10 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qQZqwvVRzc 00:35:19.556 05:12:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qQZqwvVRzc 00:35:19.815 [2024-12-10 05:12:10.755890] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qQZqwvVRzc': 0100660 00:35:19.815 [2024-12-10 05:12:10.755920] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:19.815 request: 00:35:19.815 { 00:35:19.815 "name": "key0", 00:35:19.815 "path": "/tmp/tmp.qQZqwvVRzc", 00:35:19.815 "method": "keyring_file_add_key", 00:35:19.815 "req_id": 1 00:35:19.815 } 00:35:19.815 Got JSON-RPC error response 00:35:19.815 response: 00:35:19.815 { 00:35:19.815 "code": -1, 00:35:19.815 "message": "Operation not permitted" 00:35:19.815 } 00:35:19.815 05:12:10 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:19.815 05:12:10 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:19.815 05:12:10 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:19.815 05:12:10 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:19.815 05:12:10 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.qQZqwvVRzc 00:35:19.815 05:12:10 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qQZqwvVRzc 00:35:19.815 05:12:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qQZqwvVRzc 00:35:20.074 05:12:10 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.qQZqwvVRzc 00:35:20.074 05:12:10 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:20.074 05:12:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:20.074 05:12:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:20.074 05:12:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:20.074 05:12:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:20.074 05:12:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:20.074 05:12:11 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:20.074 05:12:11 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:20.074 05:12:11 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:20.074 05:12:11 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:20.074 05:12:11 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:20.074 05:12:11 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:20.074 05:12:11 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:20.074 05:12:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:20.074 05:12:11 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:20.074 05:12:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:20.334 [2024-12-10 05:12:11.345452] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.qQZqwvVRzc': No such file or directory 00:35:20.334 [2024-12-10 05:12:11.345478] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:20.334 [2024-12-10 05:12:11.345493] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:20.334 [2024-12-10 05:12:11.345500] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:20.334 [2024-12-10 05:12:11.345506] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:20.334 [2024-12-10 05:12:11.345512] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:20.334 request: 00:35:20.334 { 00:35:20.334 "name": "nvme0", 00:35:20.334 "trtype": "tcp", 00:35:20.334 "traddr": "127.0.0.1", 00:35:20.334 "adrfam": "ipv4", 00:35:20.334 "trsvcid": "4420", 00:35:20.334 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:20.334 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:20.334 "prchk_reftag": false, 00:35:20.334 "prchk_guard": false, 00:35:20.334 "hdgst": false, 00:35:20.334 "ddgst": false, 00:35:20.334 "psk": "key0", 00:35:20.334 "allow_unrecognized_csi": false, 00:35:20.334 "method": "bdev_nvme_attach_controller", 00:35:20.334 "req_id": 1 00:35:20.334 } 00:35:20.334 Got JSON-RPC error response 00:35:20.334 response: 00:35:20.334 { 00:35:20.334 "code": -19, 00:35:20.334 "message": "No such device" 00:35:20.334 } 00:35:20.334 05:12:11 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:20.334 05:12:11 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:20.334 05:12:11 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:20.334 05:12:11 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:20.334 05:12:11 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:20.334 05:12:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:20.593 05:12:11 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:20.593 05:12:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:20.593 05:12:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:20.593 05:12:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:20.593 05:12:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:20.593 05:12:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:20.593 05:12:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.acJAWuv7bW 00:35:20.593 05:12:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:20.593 05:12:11 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:20.593 05:12:11 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:35:20.593 05:12:11 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:20.593 05:12:11 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:20.593 05:12:11 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:20.593 05:12:11 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:20.593 05:12:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.acJAWuv7bW 00:35:20.593 05:12:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.acJAWuv7bW 00:35:20.593 05:12:11 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.acJAWuv7bW 00:35:20.593 05:12:11 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.acJAWuv7bW 00:35:20.593 05:12:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.acJAWuv7bW 00:35:20.851 05:12:11 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:20.851 05:12:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:21.110 nvme0n1 00:35:21.110 05:12:12 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:21.110 05:12:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:21.110 05:12:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:21.110 05:12:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:21.110 05:12:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:21.110 05:12:12 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:21.370 05:12:12 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:21.370 05:12:12 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:21.370 05:12:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:21.370 05:12:12 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:21.370 05:12:12 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:21.370 05:12:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:21.370 05:12:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:21.370 05:12:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:21.629 05:12:12 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:21.629 05:12:12 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:21.629 05:12:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:21.629 05:12:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:21.629 05:12:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:21.629 05:12:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:21.629 05:12:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:21.888 05:12:12 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:21.888 05:12:12 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:21.888 05:12:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:35:22.147 05:12:13 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:22.147 05:12:13 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:22.147 05:12:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:22.147 05:12:13 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:22.147 05:12:13 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.acJAWuv7bW 00:35:22.147 05:12:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.acJAWuv7bW 00:35:22.405 05:12:13 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Sxl5UuRTzL 00:35:22.405 05:12:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Sxl5UuRTzL 00:35:22.664 05:12:13 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:22.664 05:12:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:22.923 nvme0n1 00:35:22.923 05:12:13 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:22.923 05:12:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:23.183 05:12:14 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:23.183 "subsystems": [ 00:35:23.183 { 00:35:23.183 "subsystem": 
"keyring", 00:35:23.183 "config": [ 00:35:23.183 { 00:35:23.183 "method": "keyring_file_add_key", 00:35:23.183 "params": { 00:35:23.183 "name": "key0", 00:35:23.183 "path": "/tmp/tmp.acJAWuv7bW" 00:35:23.183 } 00:35:23.183 }, 00:35:23.183 { 00:35:23.183 "method": "keyring_file_add_key", 00:35:23.183 "params": { 00:35:23.183 "name": "key1", 00:35:23.183 "path": "/tmp/tmp.Sxl5UuRTzL" 00:35:23.183 } 00:35:23.183 } 00:35:23.183 ] 00:35:23.183 }, 00:35:23.183 { 00:35:23.183 "subsystem": "iobuf", 00:35:23.183 "config": [ 00:35:23.183 { 00:35:23.183 "method": "iobuf_set_options", 00:35:23.183 "params": { 00:35:23.183 "small_pool_count": 8192, 00:35:23.183 "large_pool_count": 1024, 00:35:23.183 "small_bufsize": 8192, 00:35:23.183 "large_bufsize": 135168, 00:35:23.183 "enable_numa": false 00:35:23.183 } 00:35:23.183 } 00:35:23.183 ] 00:35:23.183 }, 00:35:23.183 { 00:35:23.183 "subsystem": "sock", 00:35:23.183 "config": [ 00:35:23.183 { 00:35:23.183 "method": "sock_set_default_impl", 00:35:23.183 "params": { 00:35:23.183 "impl_name": "posix" 00:35:23.183 } 00:35:23.183 }, 00:35:23.183 { 00:35:23.183 "method": "sock_impl_set_options", 00:35:23.183 "params": { 00:35:23.183 "impl_name": "ssl", 00:35:23.183 "recv_buf_size": 4096, 00:35:23.183 "send_buf_size": 4096, 00:35:23.183 "enable_recv_pipe": true, 00:35:23.183 "enable_quickack": false, 00:35:23.183 "enable_placement_id": 0, 00:35:23.183 "enable_zerocopy_send_server": true, 00:35:23.183 "enable_zerocopy_send_client": false, 00:35:23.183 "zerocopy_threshold": 0, 00:35:23.183 "tls_version": 0, 00:35:23.183 "enable_ktls": false 00:35:23.183 } 00:35:23.183 }, 00:35:23.183 { 00:35:23.183 "method": "sock_impl_set_options", 00:35:23.183 "params": { 00:35:23.183 "impl_name": "posix", 00:35:23.183 "recv_buf_size": 2097152, 00:35:23.183 "send_buf_size": 2097152, 00:35:23.183 "enable_recv_pipe": true, 00:35:23.183 "enable_quickack": false, 00:35:23.183 "enable_placement_id": 0, 00:35:23.183 "enable_zerocopy_send_server": true, 
00:35:23.183 "enable_zerocopy_send_client": false, 00:35:23.183 "zerocopy_threshold": 0, 00:35:23.183 "tls_version": 0, 00:35:23.183 "enable_ktls": false 00:35:23.183 } 00:35:23.183 } 00:35:23.183 ] 00:35:23.183 }, 00:35:23.183 { 00:35:23.183 "subsystem": "vmd", 00:35:23.183 "config": [] 00:35:23.183 }, 00:35:23.183 { 00:35:23.183 "subsystem": "accel", 00:35:23.183 "config": [ 00:35:23.183 { 00:35:23.183 "method": "accel_set_options", 00:35:23.183 "params": { 00:35:23.183 "small_cache_size": 128, 00:35:23.183 "large_cache_size": 16, 00:35:23.183 "task_count": 2048, 00:35:23.183 "sequence_count": 2048, 00:35:23.183 "buf_count": 2048 00:35:23.183 } 00:35:23.183 } 00:35:23.183 ] 00:35:23.183 }, 00:35:23.183 { 00:35:23.183 "subsystem": "bdev", 00:35:23.183 "config": [ 00:35:23.183 { 00:35:23.183 "method": "bdev_set_options", 00:35:23.183 "params": { 00:35:23.183 "bdev_io_pool_size": 65535, 00:35:23.183 "bdev_io_cache_size": 256, 00:35:23.183 "bdev_auto_examine": true, 00:35:23.183 "iobuf_small_cache_size": 128, 00:35:23.183 "iobuf_large_cache_size": 16 00:35:23.183 } 00:35:23.183 }, 00:35:23.183 { 00:35:23.183 "method": "bdev_raid_set_options", 00:35:23.183 "params": { 00:35:23.183 "process_window_size_kb": 1024, 00:35:23.183 "process_max_bandwidth_mb_sec": 0 00:35:23.183 } 00:35:23.184 }, 00:35:23.184 { 00:35:23.184 "method": "bdev_iscsi_set_options", 00:35:23.184 "params": { 00:35:23.184 "timeout_sec": 30 00:35:23.184 } 00:35:23.184 }, 00:35:23.184 { 00:35:23.184 "method": "bdev_nvme_set_options", 00:35:23.184 "params": { 00:35:23.184 "action_on_timeout": "none", 00:35:23.184 "timeout_us": 0, 00:35:23.184 "timeout_admin_us": 0, 00:35:23.184 "keep_alive_timeout_ms": 10000, 00:35:23.184 "arbitration_burst": 0, 00:35:23.184 "low_priority_weight": 0, 00:35:23.184 "medium_priority_weight": 0, 00:35:23.184 "high_priority_weight": 0, 00:35:23.184 "nvme_adminq_poll_period_us": 10000, 00:35:23.184 "nvme_ioq_poll_period_us": 0, 00:35:23.184 "io_queue_requests": 512, 
00:35:23.184 "delay_cmd_submit": true, 00:35:23.184 "transport_retry_count": 4, 00:35:23.184 "bdev_retry_count": 3, 00:35:23.184 "transport_ack_timeout": 0, 00:35:23.184 "ctrlr_loss_timeout_sec": 0, 00:35:23.184 "reconnect_delay_sec": 0, 00:35:23.184 "fast_io_fail_timeout_sec": 0, 00:35:23.184 "disable_auto_failback": false, 00:35:23.184 "generate_uuids": false, 00:35:23.184 "transport_tos": 0, 00:35:23.184 "nvme_error_stat": false, 00:35:23.184 "rdma_srq_size": 0, 00:35:23.184 "io_path_stat": false, 00:35:23.184 "allow_accel_sequence": false, 00:35:23.184 "rdma_max_cq_size": 0, 00:35:23.184 "rdma_cm_event_timeout_ms": 0, 00:35:23.184 "dhchap_digests": [ 00:35:23.184 "sha256", 00:35:23.184 "sha384", 00:35:23.184 "sha512" 00:35:23.184 ], 00:35:23.184 "dhchap_dhgroups": [ 00:35:23.184 "null", 00:35:23.184 "ffdhe2048", 00:35:23.184 "ffdhe3072", 00:35:23.184 "ffdhe4096", 00:35:23.184 "ffdhe6144", 00:35:23.184 "ffdhe8192" 00:35:23.184 ] 00:35:23.184 } 00:35:23.184 }, 00:35:23.184 { 00:35:23.184 "method": "bdev_nvme_attach_controller", 00:35:23.184 "params": { 00:35:23.184 "name": "nvme0", 00:35:23.184 "trtype": "TCP", 00:35:23.184 "adrfam": "IPv4", 00:35:23.184 "traddr": "127.0.0.1", 00:35:23.184 "trsvcid": "4420", 00:35:23.184 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:23.184 "prchk_reftag": false, 00:35:23.184 "prchk_guard": false, 00:35:23.184 "ctrlr_loss_timeout_sec": 0, 00:35:23.184 "reconnect_delay_sec": 0, 00:35:23.184 "fast_io_fail_timeout_sec": 0, 00:35:23.184 "psk": "key0", 00:35:23.184 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:23.184 "hdgst": false, 00:35:23.184 "ddgst": false, 00:35:23.184 "multipath": "multipath" 00:35:23.184 } 00:35:23.184 }, 00:35:23.184 { 00:35:23.184 "method": "bdev_nvme_set_hotplug", 00:35:23.184 "params": { 00:35:23.184 "period_us": 100000, 00:35:23.184 "enable": false 00:35:23.184 } 00:35:23.184 }, 00:35:23.184 { 00:35:23.184 "method": "bdev_wait_for_examine" 00:35:23.184 } 00:35:23.184 ] 00:35:23.184 }, 00:35:23.184 { 
00:35:23.184 "subsystem": "nbd", 00:35:23.184 "config": [] 00:35:23.184 } 00:35:23.184 ] 00:35:23.184 }' 00:35:23.184 05:12:14 keyring_file -- keyring/file.sh@115 -- # killprocess 902619 00:35:23.184 05:12:14 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 902619 ']' 00:35:23.184 05:12:14 keyring_file -- common/autotest_common.sh@958 -- # kill -0 902619 00:35:23.184 05:12:14 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:23.184 05:12:14 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:23.184 05:12:14 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 902619 00:35:23.184 05:12:14 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:23.184 05:12:14 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:23.184 05:12:14 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 902619' 00:35:23.184 killing process with pid 902619 00:35:23.184 05:12:14 keyring_file -- common/autotest_common.sh@973 -- # kill 902619 00:35:23.184 Received shutdown signal, test time was about 1.000000 seconds 00:35:23.184 00:35:23.184 Latency(us) 00:35:23.184 [2024-12-10T04:12:14.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:23.184 [2024-12-10T04:12:14.321Z] =================================================================================================================== 00:35:23.184 [2024-12-10T04:12:14.321Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:23.184 05:12:14 keyring_file -- common/autotest_common.sh@978 -- # wait 902619 00:35:23.444 05:12:14 keyring_file -- keyring/file.sh@118 -- # bperfpid=904577 00:35:23.444 05:12:14 keyring_file -- keyring/file.sh@120 -- # waitforlisten 904577 /var/tmp/bperf.sock 00:35:23.444 05:12:14 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 904577 ']' 00:35:23.444 05:12:14 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:23.444 05:12:14 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:23.444 05:12:14 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:23.444 05:12:14 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:23.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:23.444 05:12:14 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:23.444 "subsystems": [ 00:35:23.444 { 00:35:23.444 "subsystem": "keyring", 00:35:23.444 "config": [ 00:35:23.444 { 00:35:23.444 "method": "keyring_file_add_key", 00:35:23.444 "params": { 00:35:23.444 "name": "key0", 00:35:23.444 "path": "/tmp/tmp.acJAWuv7bW" 00:35:23.444 } 00:35:23.444 }, 00:35:23.444 { 00:35:23.444 "method": "keyring_file_add_key", 00:35:23.444 "params": { 00:35:23.444 "name": "key1", 00:35:23.444 "path": "/tmp/tmp.Sxl5UuRTzL" 00:35:23.444 } 00:35:23.444 } 00:35:23.444 ] 00:35:23.444 }, 00:35:23.444 { 00:35:23.444 "subsystem": "iobuf", 00:35:23.444 "config": [ 00:35:23.444 { 00:35:23.444 "method": "iobuf_set_options", 00:35:23.444 "params": { 00:35:23.444 "small_pool_count": 8192, 00:35:23.444 "large_pool_count": 1024, 00:35:23.444 "small_bufsize": 8192, 00:35:23.444 "large_bufsize": 135168, 00:35:23.444 "enable_numa": false 00:35:23.444 } 00:35:23.444 } 00:35:23.444 ] 00:35:23.444 }, 00:35:23.444 { 00:35:23.444 "subsystem": "sock", 00:35:23.444 "config": [ 00:35:23.444 { 00:35:23.444 "method": "sock_set_default_impl", 00:35:23.444 "params": { 00:35:23.444 "impl_name": "posix" 00:35:23.444 } 00:35:23.444 }, 00:35:23.444 { 00:35:23.444 "method": "sock_impl_set_options", 00:35:23.444 "params": { 00:35:23.444 "impl_name": "ssl", 00:35:23.444 "recv_buf_size": 4096, 00:35:23.444 
"send_buf_size": 4096, 00:35:23.444 "enable_recv_pipe": true, 00:35:23.444 "enable_quickack": false, 00:35:23.444 "enable_placement_id": 0, 00:35:23.444 "enable_zerocopy_send_server": true, 00:35:23.444 "enable_zerocopy_send_client": false, 00:35:23.444 "zerocopy_threshold": 0, 00:35:23.444 "tls_version": 0, 00:35:23.444 "enable_ktls": false 00:35:23.444 } 00:35:23.444 }, 00:35:23.444 { 00:35:23.444 "method": "sock_impl_set_options", 00:35:23.444 "params": { 00:35:23.444 "impl_name": "posix", 00:35:23.444 "recv_buf_size": 2097152, 00:35:23.444 "send_buf_size": 2097152, 00:35:23.444 "enable_recv_pipe": true, 00:35:23.444 "enable_quickack": false, 00:35:23.444 "enable_placement_id": 0, 00:35:23.444 "enable_zerocopy_send_server": true, 00:35:23.444 "enable_zerocopy_send_client": false, 00:35:23.444 "zerocopy_threshold": 0, 00:35:23.444 "tls_version": 0, 00:35:23.444 "enable_ktls": false 00:35:23.444 } 00:35:23.444 } 00:35:23.444 ] 00:35:23.444 }, 00:35:23.444 { 00:35:23.444 "subsystem": "vmd", 00:35:23.444 "config": [] 00:35:23.444 }, 00:35:23.444 { 00:35:23.444 "subsystem": "accel", 00:35:23.444 "config": [ 00:35:23.444 { 00:35:23.444 "method": "accel_set_options", 00:35:23.444 "params": { 00:35:23.444 "small_cache_size": 128, 00:35:23.444 "large_cache_size": 16, 00:35:23.444 "task_count": 2048, 00:35:23.444 "sequence_count": 2048, 00:35:23.444 "buf_count": 2048 00:35:23.444 } 00:35:23.444 } 00:35:23.444 ] 00:35:23.444 }, 00:35:23.444 { 00:35:23.444 "subsystem": "bdev", 00:35:23.444 "config": [ 00:35:23.444 { 00:35:23.444 "method": "bdev_set_options", 00:35:23.444 "params": { 00:35:23.444 "bdev_io_pool_size": 65535, 00:35:23.444 "bdev_io_cache_size": 256, 00:35:23.444 "bdev_auto_examine": true, 00:35:23.444 "iobuf_small_cache_size": 128, 00:35:23.444 "iobuf_large_cache_size": 16 00:35:23.444 } 00:35:23.444 }, 00:35:23.444 { 00:35:23.444 "method": "bdev_raid_set_options", 00:35:23.444 "params": { 00:35:23.444 "process_window_size_kb": 1024, 00:35:23.444 
"process_max_bandwidth_mb_sec": 0 00:35:23.444 } 00:35:23.444 }, 00:35:23.444 { 00:35:23.444 "method": "bdev_iscsi_set_options", 00:35:23.444 "params": { 00:35:23.444 "timeout_sec": 30 00:35:23.444 } 00:35:23.444 }, 00:35:23.444 { 00:35:23.444 "method": "bdev_nvme_set_options", 00:35:23.444 "params": { 00:35:23.444 "action_on_timeout": "none", 00:35:23.444 "timeout_us": 0, 00:35:23.444 "timeout_admin_us": 0, 00:35:23.444 "keep_alive_timeout_ms": 10000, 00:35:23.444 "arbitration_burst": 0, 00:35:23.444 "low_priority_weight": 0, 00:35:23.444 "medium_priority_weight": 0, 00:35:23.444 "high_priority_weight": 0, 00:35:23.444 "nvme_adminq_poll_period_us": 10000, 00:35:23.444 "nvme_ioq_poll_period_us": 0, 00:35:23.444 "io_queue_requests": 512, 00:35:23.444 "delay_cmd_submit": true, 00:35:23.444 "transport_retry_count": 4, 00:35:23.444 "bdev_retry_count": 3, 00:35:23.444 "transport_ack_timeout": 0, 00:35:23.444 "ctrlr_loss_timeout_sec": 0, 00:35:23.444 "reconnect_delay_sec": 0, 00:35:23.444 "fast_io_fail_timeout_sec": 0, 00:35:23.444 "disable_auto_failback": false, 00:35:23.444 "generate_uuids": false, 00:35:23.444 "transport_tos": 0, 00:35:23.444 "nvme_error_stat": false, 00:35:23.444 "rdma_srq_size": 0, 00:35:23.444 "io_path_stat": false, 00:35:23.444 "allow_accel_sequence": false, 00:35:23.444 "rdma_max_cq_size": 0, 00:35:23.444 "rdma_cm_event_timeout_ms": 0, 00:35:23.444 "dhchap_digests": [ 00:35:23.444 "sha256", 00:35:23.444 "sha384", 00:35:23.444 "sha512" 00:35:23.444 ], 00:35:23.444 "dhchap_dhgroups": [ 00:35:23.444 "null", 00:35:23.444 "ffdhe2048", 00:35:23.444 "ffdhe3072", 00:35:23.444 "ffdhe4096", 00:35:23.444 "ffdhe6144", 00:35:23.444 "ffdhe8192" 00:35:23.444 ] 00:35:23.444 } 00:35:23.444 }, 00:35:23.444 { 00:35:23.444 "method": "bdev_nvme_attach_controller", 00:35:23.444 "params": { 00:35:23.444 "name": "nvme0", 00:35:23.444 "trtype": "TCP", 00:35:23.444 "adrfam": "IPv4", 00:35:23.444 "traddr": "127.0.0.1", 00:35:23.444 "trsvcid": "4420", 00:35:23.444 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:35:23.444 "prchk_reftag": false, 00:35:23.444 "prchk_guard": false, 00:35:23.444 "ctrlr_loss_timeout_sec": 0, 00:35:23.444 "reconnect_delay_sec": 0, 00:35:23.444 "fast_io_fail_timeout_sec": 0, 00:35:23.444 "psk": "key0", 00:35:23.444 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:23.444 "hdgst": false, 00:35:23.444 "ddgst": false, 00:35:23.444 "multipath": "multipath" 00:35:23.444 } 00:35:23.444 }, 00:35:23.444 { 00:35:23.444 "method": "bdev_nvme_set_hotplug", 00:35:23.444 "params": { 00:35:23.444 "period_us": 100000, 00:35:23.444 "enable": false 00:35:23.444 } 00:35:23.444 }, 00:35:23.444 { 00:35:23.444 "method": "bdev_wait_for_examine" 00:35:23.444 } 00:35:23.444 ] 00:35:23.444 }, 00:35:23.444 { 00:35:23.444 "subsystem": "nbd", 00:35:23.444 "config": [] 00:35:23.444 } 00:35:23.444 ] 00:35:23.444 }' 00:35:23.444 05:12:14 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:23.445 05:12:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:23.445 [2024-12-10 05:12:14.402358] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:35:23.445 [2024-12-10 05:12:14.402405] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904577 ] 00:35:23.445 [2024-12-10 05:12:14.476270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.445 [2024-12-10 05:12:14.513933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:23.703 [2024-12-10 05:12:14.675699] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:24.270 05:12:15 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:24.270 05:12:15 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:24.270 05:12:15 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:24.270 05:12:15 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:24.270 05:12:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:24.528 05:12:15 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:24.528 05:12:15 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:24.528 05:12:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:24.528 05:12:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:24.528 05:12:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:24.528 05:12:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:24.528 05:12:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:24.528 05:12:15 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:24.528 05:12:15 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:24.528 05:12:15 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:24.528 05:12:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:24.528 05:12:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:24.528 05:12:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:24.528 05:12:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:24.786 05:12:15 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:24.786 05:12:15 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:24.786 05:12:15 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:24.786 05:12:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:25.045 05:12:16 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:25.045 05:12:16 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:25.045 05:12:16 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.acJAWuv7bW /tmp/tmp.Sxl5UuRTzL 00:35:25.045 05:12:16 keyring_file -- keyring/file.sh@20 -- # killprocess 904577 00:35:25.045 05:12:16 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 904577 ']' 00:35:25.045 05:12:16 keyring_file -- common/autotest_common.sh@958 -- # kill -0 904577 00:35:25.045 05:12:16 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:25.045 05:12:16 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.045 05:12:16 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 904577 00:35:25.045 05:12:16 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:25.045 05:12:16 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:25.045 05:12:16 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 904577' 00:35:25.045 killing process with pid 904577 00:35:25.045 05:12:16 keyring_file -- common/autotest_common.sh@973 -- # kill 904577 00:35:25.045 Received shutdown signal, test time was about 1.000000 seconds 00:35:25.045 00:35:25.045 Latency(us) 00:35:25.045 [2024-12-10T04:12:16.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.045 [2024-12-10T04:12:16.182Z] =================================================================================================================== 00:35:25.045 [2024-12-10T04:12:16.182Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:25.045 05:12:16 keyring_file -- common/autotest_common.sh@978 -- # wait 904577 00:35:25.304 05:12:16 keyring_file -- keyring/file.sh@21 -- # killprocess 902612 00:35:25.304 05:12:16 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 902612 ']' 00:35:25.304 05:12:16 keyring_file -- common/autotest_common.sh@958 -- # kill -0 902612 00:35:25.304 05:12:16 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:25.304 05:12:16 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.304 05:12:16 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 902612 00:35:25.304 05:12:16 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:25.304 05:12:16 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:25.304 05:12:16 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 902612' 00:35:25.304 killing process with pid 902612 00:35:25.304 05:12:16 keyring_file -- common/autotest_common.sh@973 -- # kill 902612 00:35:25.304 05:12:16 keyring_file -- common/autotest_common.sh@978 -- # wait 902612 00:35:25.563 00:35:25.563 real 0m11.907s 00:35:25.563 user 0m29.647s 00:35:25.563 sys 0m2.671s 00:35:25.563 05:12:16 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:25.563 05:12:16 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:25.563 ************************************ 00:35:25.563 END TEST keyring_file 00:35:25.563 ************************************ 00:35:25.563 05:12:16 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:25.563 05:12:16 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:25.563 05:12:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:25.563 05:12:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:25.563 05:12:16 -- common/autotest_common.sh@10 -- # set +x 00:35:25.563 ************************************ 00:35:25.563 START TEST keyring_linux 00:35:25.563 ************************************ 00:35:25.563 05:12:16 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:25.563 Joined session keyring: 224868677 00:35:25.822 * Looking for test storage... 
00:35:25.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:25.822 05:12:16 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:25.822 05:12:16 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:35:25.822 05:12:16 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:25.822 05:12:16 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:25.822 05:12:16 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:25.822 05:12:16 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:25.822 05:12:16 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:25.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.822 --rc genhtml_branch_coverage=1 00:35:25.822 --rc genhtml_function_coverage=1 00:35:25.822 --rc genhtml_legend=1 00:35:25.822 --rc geninfo_all_blocks=1 00:35:25.822 --rc geninfo_unexecuted_blocks=1 00:35:25.822 00:35:25.822 ' 00:35:25.822 05:12:16 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:25.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.822 --rc genhtml_branch_coverage=1 00:35:25.822 --rc genhtml_function_coverage=1 00:35:25.822 --rc genhtml_legend=1 00:35:25.822 --rc geninfo_all_blocks=1 00:35:25.822 --rc geninfo_unexecuted_blocks=1 00:35:25.822 00:35:25.822 ' 
00:35:25.822 05:12:16 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:25.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.822 --rc genhtml_branch_coverage=1 00:35:25.822 --rc genhtml_function_coverage=1 00:35:25.822 --rc genhtml_legend=1 00:35:25.822 --rc geninfo_all_blocks=1 00:35:25.822 --rc geninfo_unexecuted_blocks=1 00:35:25.822 00:35:25.822 ' 00:35:25.822 05:12:16 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:25.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.823 --rc genhtml_branch_coverage=1 00:35:25.823 --rc genhtml_function_coverage=1 00:35:25.823 --rc genhtml_legend=1 00:35:25.823 --rc geninfo_all_blocks=1 00:35:25.823 --rc geninfo_unexecuted_blocks=1 00:35:25.823 00:35:25.823 ' 00:35:25.823 05:12:16 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:25.823 05:12:16 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:25.823 05:12:16 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:25.823 05:12:16 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:25.823 05:12:16 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:25.823 05:12:16 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:25.823 05:12:16 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.823 05:12:16 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.823 05:12:16 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.823 05:12:16 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:25.823 05:12:16 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:25.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:25.823 05:12:16 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:25.823 05:12:16 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:25.823 05:12:16 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:25.823 05:12:16 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:25.823 05:12:16 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:25.823 05:12:16 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:25.823 05:12:16 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:25.823 05:12:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:25.823 05:12:16 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:25.823 05:12:16 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:25.823 05:12:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:25.823 05:12:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:25.823 05:12:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:25.823 05:12:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:25.823 05:12:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:25.823 /tmp/:spdk-test:key0 00:35:25.823 05:12:16 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:25.823 05:12:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:25.823 05:12:16 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:25.823 05:12:16 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:25.823 05:12:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:25.823 05:12:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:25.823 05:12:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:25.823 05:12:16 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:26.082 05:12:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:26.082 05:12:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:26.082 /tmp/:spdk-test:key1 00:35:26.082 05:12:16 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=905027 00:35:26.082 05:12:16 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:26.082 05:12:16 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 905027 00:35:26.082 05:12:16 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 905027 ']' 00:35:26.082 05:12:16 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:26.082 05:12:16 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:26.082 05:12:16 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:26.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:26.082 05:12:16 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:26.082 05:12:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:26.082 [2024-12-10 05:12:17.036280] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
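Just above, `prep_key` runs `format_interchange_psk` to turn the raw hex key `00112233445566778899aabbccddeeff` into the `NVMeTLSkey-1:00:...` string written to `/tmp/:spdk-test:key0`. A minimal sketch of that interchange format, assuming the layout visible in this log (prefix, two-digit hash identifier, then base64 of the configured PSK with a CRC-32 appended); the little-endian CRC byte order is an assumption here, not taken from the SPDK source.

```python
import base64
import struct
import zlib

def format_interchange_psk(key_hex: str, hmac_id: int = 0) -> str:
    """Sketch of the NVMe/TCP TLS PSK interchange format seen in this log:
    'NVMeTLSkey-1:<hh>:<base64(configured PSK || CRC-32)>:'.
    ASSUMPTION: the CRC-32 is appended little-endian."""
    payload = key_hex.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(payload))
    return "NVMeTLSkey-1:%02x:%s:" % (hmac_id, base64.b64encode(payload + crc).decode())

psk = format_interchange_psk("00112233445566778899aabbccddeeff")
print(psk)
```

Decoding the base64 portion of the log's key0 value recovers the original 32-character hex string plus four trailing checksum bytes, which is how the `keyctl print 646912792` comparison later in the log can match byte for byte.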
00:35:26.082 [2024-12-10 05:12:17.036330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905027 ] 00:35:26.082 [2024-12-10 05:12:17.112351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:26.082 [2024-12-10 05:12:17.154011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.018 05:12:17 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:27.018 05:12:17 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:27.018 05:12:17 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:27.018 05:12:17 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.018 05:12:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:27.018 [2024-12-10 05:12:17.858483] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.018 null0 00:35:27.018 [2024-12-10 05:12:17.890537] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:27.018 [2024-12-10 05:12:17.890831] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:27.018 05:12:17 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.018 05:12:17 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:27.018 646912792 00:35:27.018 05:12:17 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:27.018 615827834 00:35:27.018 05:12:17 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=905255 00:35:27.018 05:12:17 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 905255 /var/tmp/bperf.sock 00:35:27.018 05:12:17 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:27.018 05:12:17 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 905255 ']' 00:35:27.018 05:12:17 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:27.018 05:12:17 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:27.018 05:12:17 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:27.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:27.018 05:12:17 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:27.018 05:12:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:27.018 [2024-12-10 05:12:17.960845] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:35:27.018 [2024-12-10 05:12:17.960888] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905255 ] 00:35:27.018 [2024-12-10 05:12:18.033564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.018 [2024-12-10 05:12:18.074056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:27.018 05:12:18 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:27.018 05:12:18 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:27.018 05:12:18 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:27.018 05:12:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:27.277 05:12:18 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:27.277 05:12:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:27.536 05:12:18 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:27.536 05:12:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:27.795 [2024-12-10 05:12:18.715428] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:27.795 nvme0n1 00:35:27.795 05:12:18 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:27.795 05:12:18 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:27.795 05:12:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:27.795 05:12:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:27.795 05:12:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:27.795 05:12:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:28.054 05:12:18 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:28.054 05:12:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:28.054 05:12:18 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:28.054 05:12:18 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:28.054 05:12:18 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:28.054 05:12:18 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:28.054 05:12:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:28.313 05:12:19 keyring_linux -- keyring/linux.sh@25 -- # sn=646912792 00:35:28.313 05:12:19 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:28.313 05:12:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:28.313 05:12:19 keyring_linux -- keyring/linux.sh@26 -- # [[ 646912792 == \6\4\6\9\1\2\7\9\2 ]] 00:35:28.313 05:12:19 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 646912792 00:35:28.313 05:12:19 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:28.313 05:12:19 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:28.313 Running I/O for 1 seconds... 00:35:29.249 21476.00 IOPS, 83.89 MiB/s 00:35:29.249 Latency(us) 00:35:29.249 [2024-12-10T04:12:20.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:29.249 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:29.249 nvme0n1 : 1.01 21473.76 83.88 0.00 0.00 5940.82 4868.39 10548.18 00:35:29.249 [2024-12-10T04:12:20.386Z] =================================================================================================================== 00:35:29.249 [2024-12-10T04:12:20.386Z] Total : 21473.76 83.88 0.00 0.00 5940.82 4868.39 10548.18 00:35:29.249 { 00:35:29.250 "results": [ 00:35:29.250 { 00:35:29.250 "job": "nvme0n1", 00:35:29.250 "core_mask": "0x2", 00:35:29.250 "workload": "randread", 00:35:29.250 "status": "finished", 00:35:29.250 "queue_depth": 128, 00:35:29.250 "io_size": 4096, 00:35:29.250 "runtime": 1.006065, 00:35:29.250 "iops": 21473.761635679602, 00:35:29.250 "mibps": 83.88188138937345, 00:35:29.250 "io_failed": 0, 00:35:29.250 "io_timeout": 0, 00:35:29.250 "avg_latency_us": 5940.819678895443, 00:35:29.250 "min_latency_us": 4868.388571428572, 00:35:29.250 "max_latency_us": 10548.175238095238 00:35:29.250 } 00:35:29.250 ], 00:35:29.250 "core_count": 1 00:35:29.250 } 00:35:29.250 05:12:20 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:29.250 05:12:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:29.509 05:12:20 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:29.509 05:12:20 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:29.509 05:12:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:29.509 05:12:20 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:29.509 05:12:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:29.509 05:12:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:29.768 05:12:20 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:29.768 05:12:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:29.768 05:12:20 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:29.768 05:12:20 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:29.768 05:12:20 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:29.768 05:12:20 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:29.768 05:12:20 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:29.768 05:12:20 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:29.768 05:12:20 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:29.768 05:12:20 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:29.768 05:12:20 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:29.768 05:12:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:29.768 [2024-12-10 05:12:20.899449] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:29.768 [2024-12-10 05:12:20.899921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18461a0 (107): Transport endpoint is not connected 00:35:30.027 [2024-12-10 05:12:20.900917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18461a0 (9): Bad file descriptor 00:35:30.027 [2024-12-10 05:12:20.901918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:30.027 [2024-12-10 05:12:20.901927] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:30.027 [2024-12-10 05:12:20.901934] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:30.027 [2024-12-10 05:12:20.901942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:30.027 request: 00:35:30.027 { 00:35:30.027 "name": "nvme0", 00:35:30.027 "trtype": "tcp", 00:35:30.027 "traddr": "127.0.0.1", 00:35:30.027 "adrfam": "ipv4", 00:35:30.027 "trsvcid": "4420", 00:35:30.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:30.027 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:30.027 "prchk_reftag": false, 00:35:30.027 "prchk_guard": false, 00:35:30.027 "hdgst": false, 00:35:30.027 "ddgst": false, 00:35:30.027 "psk": ":spdk-test:key1", 00:35:30.027 "allow_unrecognized_csi": false, 00:35:30.027 "method": "bdev_nvme_attach_controller", 00:35:30.027 "req_id": 1 00:35:30.027 } 00:35:30.027 Got JSON-RPC error response 00:35:30.027 response: 00:35:30.027 { 00:35:30.027 "code": -5, 00:35:30.027 "message": "Input/output error" 00:35:30.027 } 00:35:30.027 05:12:20 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:30.027 05:12:20 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:30.027 05:12:20 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:30.027 05:12:20 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:30.027 05:12:20 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:30.027 05:12:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:30.027 05:12:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:30.027 05:12:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:30.027 05:12:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:30.027 05:12:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:30.027 05:12:20 keyring_linux -- keyring/linux.sh@33 -- # sn=646912792 00:35:30.027 05:12:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 646912792 00:35:30.027 1 links removed 00:35:30.027 05:12:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:30.027 05:12:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:30.027 
05:12:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:30.027 05:12:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:30.027 05:12:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:30.027 05:12:20 keyring_linux -- keyring/linux.sh@33 -- # sn=615827834 00:35:30.027 05:12:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 615827834 00:35:30.027 1 links removed 00:35:30.027 05:12:20 keyring_linux -- keyring/linux.sh@41 -- # killprocess 905255 00:35:30.027 05:12:20 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 905255 ']' 00:35:30.027 05:12:20 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 905255 00:35:30.027 05:12:20 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:30.027 05:12:20 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:30.027 05:12:20 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 905255 00:35:30.027 05:12:21 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:30.027 05:12:21 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:30.027 05:12:21 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 905255' 00:35:30.027 killing process with pid 905255 00:35:30.027 05:12:21 keyring_linux -- common/autotest_common.sh@973 -- # kill 905255 00:35:30.027 Received shutdown signal, test time was about 1.000000 seconds 00:35:30.027 00:35:30.027 Latency(us) 00:35:30.027 [2024-12-10T04:12:21.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:30.027 [2024-12-10T04:12:21.164Z] =================================================================================================================== 00:35:30.027 [2024-12-10T04:12:21.164Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:30.027 05:12:21 keyring_linux -- common/autotest_common.sh@978 -- # wait 905255 
00:35:30.027 05:12:21 keyring_linux -- keyring/linux.sh@42 -- # killprocess 905027 00:35:30.027 05:12:21 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 905027 ']' 00:35:30.027 05:12:21 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 905027 00:35:30.027 05:12:21 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:30.028 05:12:21 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:30.028 05:12:21 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 905027 00:35:30.286 05:12:21 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:30.286 05:12:21 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:30.286 05:12:21 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 905027' 00:35:30.286 killing process with pid 905027 00:35:30.287 05:12:21 keyring_linux -- common/autotest_common.sh@973 -- # kill 905027 00:35:30.287 05:12:21 keyring_linux -- common/autotest_common.sh@978 -- # wait 905027 00:35:30.546 00:35:30.546 real 0m4.824s 00:35:30.546 user 0m8.791s 00:35:30.546 sys 0m1.464s 00:35:30.546 05:12:21 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:30.546 05:12:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:30.546 ************************************ 00:35:30.546 END TEST keyring_linux 00:35:30.546 ************************************ 00:35:30.546 05:12:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:30.546 05:12:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:30.546 05:12:21 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:30.546 05:12:21 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:30.546 05:12:21 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:30.546 05:12:21 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:30.546 05:12:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:30.546 05:12:21 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:35:30.546 05:12:21 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:30.546 05:12:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:30.546 05:12:21 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:30.546 05:12:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:30.546 05:12:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:30.546 05:12:21 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:30.546 05:12:21 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:30.546 05:12:21 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:30.546 05:12:21 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:30.546 05:12:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:30.546 05:12:21 -- common/autotest_common.sh@10 -- # set +x 00:35:30.546 05:12:21 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:30.546 05:12:21 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:30.546 05:12:21 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:30.546 05:12:21 -- common/autotest_common.sh@10 -- # set +x 00:35:35.816 INFO: APP EXITING 00:35:35.816 INFO: killing all VMs 00:35:35.816 INFO: killing vhost app 00:35:35.816 INFO: EXIT DONE 00:35:39.105 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:39.105 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:39.105 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:39.105 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:39.105 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:39.105 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:39.105 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:39.105 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:39.105 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:39.105 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:39.105 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:39.105 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:39.105 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:39.105 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:39.105 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:39.105 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:39.105 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:41.640 Cleaning 00:35:41.640 Removing: /var/run/dpdk/spdk0/config 00:35:41.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:41.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:41.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:41.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:41.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:41.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:41.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:41.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:41.641 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:41.900 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:41.900 Removing: /var/run/dpdk/spdk1/config 00:35:41.900 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:41.900 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:41.900 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:41.900 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:41.900 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:41.900 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:41.900 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:41.900 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:41.900 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:41.900 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:41.900 Removing: /var/run/dpdk/spdk2/config 00:35:41.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:41.900 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:41.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:41.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:41.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:41.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:41.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:41.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:41.900 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:41.900 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:41.900 Removing: /var/run/dpdk/spdk3/config 00:35:41.900 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:41.900 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:41.900 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:41.900 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:41.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:41.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:41.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:41.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:41.901 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:41.901 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:41.901 Removing: /var/run/dpdk/spdk4/config 00:35:41.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:41.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:41.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:41.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:41.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:41.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:41.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:41.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:41.901 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:41.901 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:35:41.901 Removing: /dev/shm/bdev_svc_trace.1 00:35:41.901 Removing: /dev/shm/nvmf_trace.0 00:35:41.901 Removing: /dev/shm/spdk_tgt_trace.pid430760 00:35:41.901 Removing: /var/run/dpdk/spdk0 00:35:41.901 Removing: /var/run/dpdk/spdk1 00:35:41.901 Removing: /var/run/dpdk/spdk2 00:35:41.901 Removing: /var/run/dpdk/spdk3 00:35:41.901 Removing: /var/run/dpdk/spdk4 00:35:41.901 Removing: /var/run/dpdk/spdk_pid428675 00:35:41.901 Removing: /var/run/dpdk/spdk_pid429700 00:35:41.901 Removing: /var/run/dpdk/spdk_pid430760 00:35:41.901 Removing: /var/run/dpdk/spdk_pid431383 00:35:41.901 Removing: /var/run/dpdk/spdk_pid432305 00:35:41.901 Removing: /var/run/dpdk/spdk_pid432531 00:35:41.901 Removing: /var/run/dpdk/spdk_pid433482 00:35:41.901 Removing: /var/run/dpdk/spdk_pid433493 00:35:42.160 Removing: /var/run/dpdk/spdk_pid433854 00:35:42.160 Removing: /var/run/dpdk/spdk_pid435338 00:35:42.160 Removing: /var/run/dpdk/spdk_pid436790 00:35:42.160 Removing: /var/run/dpdk/spdk_pid437078 00:35:42.160 Removing: /var/run/dpdk/spdk_pid437359 00:35:42.160 Removing: /var/run/dpdk/spdk_pid437657 00:35:42.160 Removing: /var/run/dpdk/spdk_pid437843 00:35:42.160 Removing: /var/run/dpdk/spdk_pid438029 00:35:42.160 Removing: /var/run/dpdk/spdk_pid438228 00:35:42.160 Removing: /var/run/dpdk/spdk_pid438543 00:35:42.160 Removing: /var/run/dpdk/spdk_pid439279 00:35:42.160 Removing: /var/run/dpdk/spdk_pid442366 00:35:42.160 Removing: /var/run/dpdk/spdk_pid442624 00:35:42.160 Removing: /var/run/dpdk/spdk_pid442872 00:35:42.160 Removing: /var/run/dpdk/spdk_pid442878 00:35:42.160 Removing: /var/run/dpdk/spdk_pid443354 00:35:42.160 Removing: /var/run/dpdk/spdk_pid443363 00:35:42.160 Removing: /var/run/dpdk/spdk_pid443843 00:35:42.160 Removing: /var/run/dpdk/spdk_pid443850 00:35:42.160 Removing: /var/run/dpdk/spdk_pid444108 00:35:42.160 Removing: /var/run/dpdk/spdk_pid444115 00:35:42.160 Removing: /var/run/dpdk/spdk_pid444366 00:35:42.160 Removing: /var/run/dpdk/spdk_pid444446 00:35:42.160 
Removing: /var/run/dpdk/spdk_pid444929 00:35:42.160 Removing: /var/run/dpdk/spdk_pid445170 00:35:42.160 Removing: /var/run/dpdk/spdk_pid445469 00:35:42.160 Removing: /var/run/dpdk/spdk_pid449577 00:35:42.160 Removing: /var/run/dpdk/spdk_pid454023 00:35:42.160 Removing: /var/run/dpdk/spdk_pid464060 00:35:42.160 Removing: /var/run/dpdk/spdk_pid464735 00:35:42.160 Removing: /var/run/dpdk/spdk_pid469153 00:35:42.160 Removing: /var/run/dpdk/spdk_pid469401 00:35:42.160 Removing: /var/run/dpdk/spdk_pid473587 00:35:42.160 Removing: /var/run/dpdk/spdk_pid479449 00:35:42.160 Removing: /var/run/dpdk/spdk_pid482109 00:35:42.160 Removing: /var/run/dpdk/spdk_pid492322 00:35:42.160 Removing: /var/run/dpdk/spdk_pid501621 00:35:42.160 Removing: /var/run/dpdk/spdk_pid503384 00:35:42.160 Removing: /var/run/dpdk/spdk_pid504288 00:35:42.160 Removing: /var/run/dpdk/spdk_pid521046 00:35:42.160 Removing: /var/run/dpdk/spdk_pid525040 00:35:42.160 Removing: /var/run/dpdk/spdk_pid570540 00:35:42.160 Removing: /var/run/dpdk/spdk_pid575824 00:35:42.160 Removing: /var/run/dpdk/spdk_pid581693 00:35:42.160 Removing: /var/run/dpdk/spdk_pid588042 00:35:42.160 Removing: /var/run/dpdk/spdk_pid588045 00:35:42.160 Removing: /var/run/dpdk/spdk_pid588932 00:35:42.160 Removing: /var/run/dpdk/spdk_pid589819 00:35:42.160 Removing: /var/run/dpdk/spdk_pid590710 00:35:42.160 Removing: /var/run/dpdk/spdk_pid591165 00:35:42.160 Removing: /var/run/dpdk/spdk_pid591186 00:35:42.160 Removing: /var/run/dpdk/spdk_pid591500 00:35:42.160 Removing: /var/run/dpdk/spdk_pid591626 00:35:42.160 Removing: /var/run/dpdk/spdk_pid591632 00:35:42.160 Removing: /var/run/dpdk/spdk_pid592645 00:35:42.160 Removing: /var/run/dpdk/spdk_pid593923 00:35:42.160 Removing: /var/run/dpdk/spdk_pid594818 00:35:42.160 Removing: /var/run/dpdk/spdk_pid595482 00:35:42.160 Removing: /var/run/dpdk/spdk_pid595493 00:35:42.160 Removing: /var/run/dpdk/spdk_pid595717 00:35:42.160 Removing: /var/run/dpdk/spdk_pid596717 00:35:42.160 Removing: 
/var/run/dpdk/spdk_pid597730 00:35:42.160 Removing: /var/run/dpdk/spdk_pid605910 00:35:42.419 Removing: /var/run/dpdk/spdk_pid634678 00:35:42.419 Removing: /var/run/dpdk/spdk_pid639099 00:35:42.419 Removing: /var/run/dpdk/spdk_pid640656 00:35:42.419 Removing: /var/run/dpdk/spdk_pid642446 00:35:42.419 Removing: /var/run/dpdk/spdk_pid642670 00:35:42.419 Removing: /var/run/dpdk/spdk_pid642694 00:35:42.419 Removing: /var/run/dpdk/spdk_pid642919 00:35:42.419 Removing: /var/run/dpdk/spdk_pid643408 00:35:42.419 Removing: /var/run/dpdk/spdk_pid645194 00:35:42.419 Removing: /var/run/dpdk/spdk_pid645946 00:35:42.419 Removing: /var/run/dpdk/spdk_pid646427 00:35:42.419 Removing: /var/run/dpdk/spdk_pid648559 00:35:42.419 Removing: /var/run/dpdk/spdk_pid648963 00:35:42.419 Removing: /var/run/dpdk/spdk_pid649657 00:35:42.419 Removing: /var/run/dpdk/spdk_pid653845 00:35:42.419 Removing: /var/run/dpdk/spdk_pid659248 00:35:42.419 Removing: /var/run/dpdk/spdk_pid659249 00:35:42.419 Removing: /var/run/dpdk/spdk_pid659250 00:35:42.419 Removing: /var/run/dpdk/spdk_pid663060 00:35:42.419 Removing: /var/run/dpdk/spdk_pid672083 00:35:42.419 Removing: /var/run/dpdk/spdk_pid676168 00:35:42.419 Removing: /var/run/dpdk/spdk_pid682258 00:35:42.419 Removing: /var/run/dpdk/spdk_pid683535 00:35:42.419 Removing: /var/run/dpdk/spdk_pid684829 00:35:42.419 Removing: /var/run/dpdk/spdk_pid686283 00:35:42.419 Removing: /var/run/dpdk/spdk_pid690739 00:35:42.419 Removing: /var/run/dpdk/spdk_pid695213 00:35:42.419 Removing: /var/run/dpdk/spdk_pid699170 00:35:42.419 Removing: /var/run/dpdk/spdk_pid706636 00:35:42.419 Removing: /var/run/dpdk/spdk_pid706699 00:35:42.419 Removing: /var/run/dpdk/spdk_pid711263 00:35:42.419 Removing: /var/run/dpdk/spdk_pid711484 00:35:42.419 Removing: /var/run/dpdk/spdk_pid711711 00:35:42.419 Removing: /var/run/dpdk/spdk_pid712153 00:35:42.419 Removing: /var/run/dpdk/spdk_pid712158 00:35:42.419 Removing: /var/run/dpdk/spdk_pid716560 00:35:42.419 Removing: 
/var/run/dpdk/spdk_pid717121 00:35:42.419 Removing: /var/run/dpdk/spdk_pid721807 00:35:42.419 Removing: /var/run/dpdk/spdk_pid724576 00:35:42.419 Removing: /var/run/dpdk/spdk_pid729866 00:35:42.419 Removing: /var/run/dpdk/spdk_pid735096 00:35:42.419 Removing: /var/run/dpdk/spdk_pid743682 00:35:42.419 Removing: /var/run/dpdk/spdk_pid750728 00:35:42.419 Removing: /var/run/dpdk/spdk_pid750746 00:35:42.419 Removing: /var/run/dpdk/spdk_pid769708 00:35:42.419 Removing: /var/run/dpdk/spdk_pid770167 00:35:42.419 Removing: /var/run/dpdk/spdk_pid770840 00:35:42.419 Removing: /var/run/dpdk/spdk_pid771300 00:35:42.419 Removing: /var/run/dpdk/spdk_pid772022 00:35:42.419 Removing: /var/run/dpdk/spdk_pid772526 00:35:42.419 Removing: /var/run/dpdk/spdk_pid773154 00:35:42.419 Removing: /var/run/dpdk/spdk_pid773626 00:35:42.419 Removing: /var/run/dpdk/spdk_pid777800 00:35:42.419 Removing: /var/run/dpdk/spdk_pid778025 00:35:42.419 Removing: /var/run/dpdk/spdk_pid783973 00:35:42.419 Removing: /var/run/dpdk/spdk_pid784238 00:35:42.419 Removing: /var/run/dpdk/spdk_pid789558 00:35:42.419 Removing: /var/run/dpdk/spdk_pid793679 00:35:42.419 Removing: /var/run/dpdk/spdk_pid803273 00:35:42.419 Removing: /var/run/dpdk/spdk_pid803941 00:35:42.420 Removing: /var/run/dpdk/spdk_pid808070 00:35:42.420 Removing: /var/run/dpdk/spdk_pid808367 00:35:42.420 Removing: /var/run/dpdk/spdk_pid812641 00:35:42.678 Removing: /var/run/dpdk/spdk_pid818575 00:35:42.678 Removing: /var/run/dpdk/spdk_pid821095 00:35:42.678 Removing: /var/run/dpdk/spdk_pid831019 00:35:42.678 Removing: /var/run/dpdk/spdk_pid839587 00:35:42.678 Removing: /var/run/dpdk/spdk_pid841350 00:35:42.678 Removing: /var/run/dpdk/spdk_pid842248 00:35:42.678 Removing: /var/run/dpdk/spdk_pid858078 00:35:42.678 Removing: /var/run/dpdk/spdk_pid862190 00:35:42.678 Removing: /var/run/dpdk/spdk_pid865067 00:35:42.678 Removing: /var/run/dpdk/spdk_pid872743 00:35:42.678 Removing: /var/run/dpdk/spdk_pid872748 00:35:42.678 Removing: 
/var/run/dpdk/spdk_pid877728 00:35:42.678 Removing: /var/run/dpdk/spdk_pid879615 00:35:42.678 Removing: /var/run/dpdk/spdk_pid881532 00:35:42.678 Removing: /var/run/dpdk/spdk_pid882678 00:35:42.678 Removing: /var/run/dpdk/spdk_pid884677 00:35:42.678 Removing: /var/run/dpdk/spdk_pid885722 00:35:42.678 Removing: /var/run/dpdk/spdk_pid894494 00:35:42.678 Removing: /var/run/dpdk/spdk_pid894945 00:35:42.678 Removing: /var/run/dpdk/spdk_pid895400 00:35:42.678 Removing: /var/run/dpdk/spdk_pid897823 00:35:42.678 Removing: /var/run/dpdk/spdk_pid898276 00:35:42.678 Removing: /var/run/dpdk/spdk_pid898735 00:35:42.678 Removing: /var/run/dpdk/spdk_pid902612 00:35:42.678 Removing: /var/run/dpdk/spdk_pid902619 00:35:42.678 Removing: /var/run/dpdk/spdk_pid904577 00:35:42.678 Removing: /var/run/dpdk/spdk_pid905027 00:35:42.678 Removing: /var/run/dpdk/spdk_pid905255 00:35:42.678 Clean 00:35:42.678 05:12:33 -- common/autotest_common.sh@1453 -- # return 0 00:35:42.678 05:12:33 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:35:42.678 05:12:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:42.678 05:12:33 -- common/autotest_common.sh@10 -- # set +x 00:35:42.678 05:12:33 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:35:42.678 05:12:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:42.678 05:12:33 -- common/autotest_common.sh@10 -- # set +x 00:35:42.938 05:12:33 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:42.938 05:12:33 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:35:42.938 05:12:33 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:35:42.938 05:12:33 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:35:42.938 05:12:33 -- spdk/autotest.sh@398 -- # hostname 00:35:42.938 05:12:33 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:35:42.938 geninfo: WARNING: invalid characters removed from testname! 00:36:04.872 05:12:54 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:06.248 05:12:57 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:08.150 05:12:59 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:10.054 05:13:00 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:11.958 05:13:02 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:13.860 05:13:04 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:15.763 05:13:06 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:15.763 05:13:06 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:15.763 05:13:06 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:36:15.763 05:13:06 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:15.763 05:13:06 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:15.763 05:13:06 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:15.763 + [[ -n 351863 ]] 00:36:15.763 + sudo kill 351863 00:36:15.773 [Pipeline] } 00:36:15.788 [Pipeline] // stage 00:36:15.793 
[Pipeline] } 00:36:15.806 [Pipeline] // timeout 00:36:15.811 [Pipeline] } 00:36:15.824 [Pipeline] // catchError 00:36:15.828 [Pipeline] } 00:36:15.844 [Pipeline] // wrap 00:36:15.850 [Pipeline] } 00:36:15.862 [Pipeline] // catchError 00:36:15.871 [Pipeline] stage 00:36:15.874 [Pipeline] { (Epilogue) 00:36:15.887 [Pipeline] catchError 00:36:15.888 [Pipeline] { 00:36:15.900 [Pipeline] echo 00:36:15.902 Cleanup processes 00:36:15.907 [Pipeline] sh 00:36:16.192 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:16.192 916121 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:16.205 [Pipeline] sh 00:36:16.488 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:16.488 ++ grep -v 'sudo pgrep' 00:36:16.488 ++ awk '{print $1}' 00:36:16.488 + sudo kill -9 00:36:16.488 + true 00:36:16.499 [Pipeline] sh 00:36:16.782 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:29.001 [Pipeline] sh 00:36:29.285 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:29.285 Artifacts sizes are good 00:36:29.300 [Pipeline] archiveArtifacts 00:36:29.307 Archiving artifacts 00:36:29.428 [Pipeline] sh 00:36:29.814 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:36:29.828 [Pipeline] cleanWs 00:36:29.837 [WS-CLEANUP] Deleting project workspace... 00:36:29.837 [WS-CLEANUP] Deferred wipeout is used... 00:36:29.843 [WS-CLEANUP] done 00:36:29.845 [Pipeline] } 00:36:29.862 [Pipeline] // catchError 00:36:29.873 [Pipeline] sh 00:36:30.155 + logger -p user.info -t JENKINS-CI 00:36:30.167 [Pipeline] } 00:36:30.181 [Pipeline] // stage 00:36:30.187 [Pipeline] } 00:36:30.201 [Pipeline] // node 00:36:30.207 [Pipeline] End of Pipeline 00:36:30.280 Finished: SUCCESS